Jul 2 06:51:54.807813 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 06:51:54.807834 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:51:54.807846 kernel: BIOS-provided physical RAM map: Jul 2 06:51:54.807853 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 06:51:54.807860 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 06:51:54.807867 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 06:51:54.807876 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdcfff] usable Jul 2 06:51:54.807883 kernel: BIOS-e820: [mem 0x000000009cfdd000-0x000000009cffffff] reserved Jul 2 06:51:54.807891 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 2 06:51:54.807900 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 06:51:54.807907 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Jul 2 06:51:54.807914 kernel: NX (Execute Disable) protection: active Jul 2 06:51:54.807921 kernel: SMBIOS 2.8 present. Jul 2 06:51:54.807929 kernel: DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Jul 2 06:51:54.807938 kernel: Hypervisor detected: KVM Jul 2 06:51:54.807948 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 06:51:54.807955 kernel: kvm-clock: using sched offset of 3101478529 cycles Jul 2 06:51:54.807964 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 06:51:54.807972 kernel: tsc: Detected 2794.748 MHz processor Jul 2 06:51:54.807980 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 06:51:54.807988 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 06:51:54.807996 kernel: last_pfn = 0x9cfdd max_arch_pfn = 0x400000000 Jul 2 06:51:54.808004 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 06:51:54.808013 kernel: Using GB pages for direct mapping Jul 2 06:51:54.808021 kernel: ACPI: Early table checksum verification disabled Jul 2 06:51:54.808029 kernel: ACPI: RSDP 0x00000000000F59C0 000014 (v00 BOCHS ) Jul 2 06:51:54.808037 kernel: ACPI: RSDT 0x000000009CFE1BDD 000034 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808045 kernel: ACPI: FACP 0x000000009CFE1A79 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808053 kernel: ACPI: DSDT 0x000000009CFE0040 001A39 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808061 kernel: ACPI: FACS 0x000000009CFE0000 000040 Jul 2 06:51:54.808069 kernel: ACPI: APIC 0x000000009CFE1AED 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808077 kernel: ACPI: HPET 0x000000009CFE1B7D 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808086 kernel: ACPI: WAET 0x000000009CFE1BB5 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 2 06:51:54.808094 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe1a79-0x9cfe1aec] Jul 2 06:51:54.808102 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe1a78] Jul 2 06:51:54.808110 kernel: ACPI: 
Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Jul 2 06:51:54.808118 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe1aed-0x9cfe1b7c] Jul 2 06:51:54.808126 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe1b7d-0x9cfe1bb4] Jul 2 06:51:54.808134 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe1bb5-0x9cfe1bdc] Jul 2 06:51:54.808142 kernel: No NUMA configuration found Jul 2 06:51:54.808155 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdcfff] Jul 2 06:51:54.808163 kernel: NODE_DATA(0) allocated [mem 0x9cfd7000-0x9cfdcfff] Jul 2 06:51:54.808172 kernel: Zone ranges: Jul 2 06:51:54.808181 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 06:51:54.808189 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdcfff] Jul 2 06:51:54.808198 kernel: Normal empty Jul 2 06:51:54.808206 kernel: Movable zone start for each node Jul 2 06:51:54.808216 kernel: Early memory node ranges Jul 2 06:51:54.808224 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 06:51:54.808237 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdcfff] Jul 2 06:51:54.808246 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdcfff] Jul 2 06:51:54.808254 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 06:51:54.808263 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 06:51:54.808271 kernel: On node 0, zone DMA32: 12323 pages in unavailable ranges Jul 2 06:51:54.808280 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 2 06:51:54.808288 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 06:51:54.808299 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 2 06:51:54.808307 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 2 06:51:54.808316 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 06:51:54.808325 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 06:51:54.808346 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 06:51:54.808355 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 06:51:54.808364 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 06:51:54.808372 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 06:51:54.808381 kernel: TSC deadline timer available Jul 2 06:51:54.808391 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 2 06:51:54.808399 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 2 06:51:54.808408 kernel: kvm-guest: setup PV sched yield Jul 2 06:51:54.808416 kernel: [mem 0x9d000000-0xfeffbfff] available for PCI devices Jul 2 06:51:54.808425 kernel: Booting paravirtualized kernel on KVM Jul 2 06:51:54.808433 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 06:51:54.808442 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 2 06:51:54.808450 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u524288 Jul 2 06:51:54.808459 kernel: pcpu-alloc: s194792 r8192 d30488 u524288 alloc=1*2097152 Jul 2 06:51:54.808469 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 2 06:51:54.808477 kernel: kvm-guest: PV spinlocks enabled Jul 2 06:51:54.808486 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 06:51:54.808494 kernel: Fallback order for Node 0: 0 Jul 2 06:51:54.808503 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632733 Jul 2 06:51:54.808511 kernel: Policy zone: DMA32 Jul 2 06:51:54.808521 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:51:54.808530 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 06:51:54.808540 kernel: random: crng init done Jul 2 06:51:54.808549 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 06:51:54.808557 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 06:51:54.808566 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 06:51:54.808575 kernel: Memory: 2430544K/2571756K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 140952K reserved, 0K cma-reserved) Jul 2 06:51:54.808584 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 2 06:51:54.808592 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 06:51:54.808601 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 06:51:54.808609 kernel: Dynamic Preempt: voluntary Jul 2 06:51:54.808619 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 06:51:54.808628 kernel: rcu: RCU event tracing is enabled. Jul 2 06:51:54.808637 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 2 06:51:54.808646 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 06:51:54.808654 kernel: Rude variant of Tasks RCU enabled. Jul 2 06:51:54.808663 kernel: Tracing variant of Tasks RCU enabled. Jul 2 06:51:54.808671 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 06:51:54.808680 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 2 06:51:54.808688 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 2 06:51:54.808698 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 2 06:51:54.808707 kernel: Console: colour VGA+ 80x25 Jul 2 06:51:54.808715 kernel: printk: console [ttyS0] enabled Jul 2 06:51:54.808724 kernel: ACPI: Core revision 20220331 Jul 2 06:51:54.808740 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 2 06:51:54.808752 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 06:51:54.808761 kernel: x2apic enabled Jul 2 06:51:54.808769 kernel: Switched APIC routing to physical x2apic. Jul 2 06:51:54.808778 kernel: kvm-guest: setup PV IPIs Jul 2 06:51:54.808786 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 2 06:51:54.808797 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 2 06:51:54.808805 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Jul 2 06:51:54.808814 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 2 06:51:54.808823 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 2 06:51:54.808831 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 2 06:51:54.808840 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 06:51:54.808849 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 06:51:54.808858 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 06:51:54.808874 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 06:51:54.808883 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 2 06:51:54.808892 kernel: RETBleed: Mitigation: untrained return thunk Jul 2 06:51:54.808903 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 2 06:51:54.808912 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 2 06:51:54.808921 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 06:51:54.808929 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 06:51:54.808938 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 06:51:54.808947 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 06:51:54.808958 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 2 06:51:54.808967 kernel: Freeing SMP alternatives memory: 32K Jul 2 06:51:54.808976 kernel: pid_max: default: 32768 minimum: 301 Jul 2 06:51:54.808985 kernel: LSM: Security Framework initializing Jul 2 06:51:54.808994 kernel: SELinux: Initializing. Jul 2 06:51:54.809003 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 06:51:54.809013 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 2 06:51:54.809024 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 2 06:51:54.809036 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:51:54.809047 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:51:54.809056 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:51:54.809066 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:51:54.809075 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 06:51:54.809084 kernel: cblist_init_generic: Setting shift to 2 and lim to 1. Jul 2 06:51:54.809093 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 2 06:51:54.809102 kernel: ... version: 0 Jul 2 06:51:54.809111 kernel: ... bit width: 48 Jul 2 06:51:54.809120 kernel: ... generic registers: 6 Jul 2 06:51:54.809131 kernel: ... value mask: 0000ffffffffffff Jul 2 06:51:54.809140 kernel: ... max period: 00007fffffffffff Jul 2 06:51:54.809149 kernel: ... fixed-purpose events: 0 Jul 2 06:51:54.809157 kernel: ... event mask: 000000000000003f Jul 2 06:51:54.809166 kernel: signal: max sigframe size: 1776 Jul 2 06:51:54.809175 kernel: rcu: Hierarchical SRCU implementation. Jul 2 06:51:54.809184 kernel: rcu: Max phase no-delay instances is 400. Jul 2 06:51:54.809193 kernel: smp: Bringing up secondary CPUs ... Jul 2 06:51:54.809201 kernel: x86: Booting SMP configuration: Jul 2 06:51:54.809212 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 2 06:51:54.809220 kernel: smp: Brought up 1 node, 4 CPUs Jul 2 06:51:54.809229 kernel: smpboot: Max logical packages: 1 Jul 2 06:51:54.809238 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Jul 2 06:51:54.809247 kernel: devtmpfs: initialized Jul 2 06:51:54.809256 kernel: x86/mm: Memory block size: 128MB Jul 2 06:51:54.809265 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 06:51:54.809274 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 2 06:51:54.809283 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 06:51:54.809294 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 06:51:54.809303 kernel: audit: initializing netlink subsys (disabled) Jul 2 06:51:54.809312 kernel: audit: type=2000 audit(1719903115.192:1): state=initialized audit_enabled=0 res=1 Jul 2 06:51:54.809324 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 06:51:54.809399 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 06:51:54.809408 kernel: cpuidle: using governor menu Jul 2 06:51:54.809417 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 06:51:54.809425 kernel: dca service started, version 1.12.1 Jul 2 06:51:54.809434 kernel: PCI: Using configuration type 1 for base access Jul 2 06:51:54.809446 kernel: PCI: Using configuration type 1 for extended access Jul 2 06:51:54.809455 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 2 06:51:54.809464 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 06:51:54.809473 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 06:51:54.809482 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 06:51:54.809491 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 06:51:54.809499 kernel: ACPI: Added _OSI(Module Device) Jul 2 06:51:54.809508 kernel: ACPI: Added _OSI(Processor Device) Jul 2 06:51:54.809517 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 06:51:54.809529 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 06:51:54.809538 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 06:51:54.809547 kernel: ACPI: Interpreter enabled Jul 2 06:51:54.809555 kernel: ACPI: PM: (supports S0 S3 S5) Jul 2 06:51:54.809564 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 06:51:54.809573 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 06:51:54.809582 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 06:51:54.809591 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jul 2 06:51:54.809600 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 06:51:54.809781 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 2 06:51:54.809797 kernel: acpiphp: Slot [3] registered Jul 2 06:51:54.809806 kernel: acpiphp: Slot [4] registered Jul 2 06:51:54.809815 kernel: acpiphp: Slot [5] registered Jul 2 06:51:54.809824 kernel: acpiphp: Slot [6] registered Jul 2 06:51:54.809833 kernel: acpiphp: Slot [7] registered Jul 2 06:51:54.809841 kernel: acpiphp: Slot [8] registered Jul 2 06:51:54.809850 kernel: acpiphp: Slot [9] registered Jul 2 06:51:54.809862 kernel: acpiphp: Slot [10] registered Jul 2 06:51:54.809870 kernel: acpiphp: Slot [11] registered Jul 2 06:51:54.809879 kernel: acpiphp: Slot [12] registered Jul 2 
06:51:54.809888 kernel: acpiphp: Slot [13] registered Jul 2 06:51:54.809896 kernel: acpiphp: Slot [14] registered Jul 2 06:51:54.809905 kernel: acpiphp: Slot [15] registered Jul 2 06:51:54.809914 kernel: acpiphp: Slot [16] registered Jul 2 06:51:54.809922 kernel: acpiphp: Slot [17] registered Jul 2 06:51:54.809931 kernel: acpiphp: Slot [18] registered Jul 2 06:51:54.809940 kernel: acpiphp: Slot [19] registered Jul 2 06:51:54.809950 kernel: acpiphp: Slot [20] registered Jul 2 06:51:54.809959 kernel: acpiphp: Slot [21] registered Jul 2 06:51:54.809968 kernel: acpiphp: Slot [22] registered Jul 2 06:51:54.809977 kernel: acpiphp: Slot [23] registered Jul 2 06:51:54.809986 kernel: acpiphp: Slot [24] registered Jul 2 06:51:54.809994 kernel: acpiphp: Slot [25] registered Jul 2 06:51:54.810003 kernel: acpiphp: Slot [26] registered Jul 2 06:51:54.810012 kernel: acpiphp: Slot [27] registered Jul 2 06:51:54.810021 kernel: acpiphp: Slot [28] registered Jul 2 06:51:54.810031 kernel: acpiphp: Slot [29] registered Jul 2 06:51:54.810040 kernel: acpiphp: Slot [30] registered Jul 2 06:51:54.810049 kernel: acpiphp: Slot [31] registered Jul 2 06:51:54.810057 kernel: PCI host bridge to bus 0000:00 Jul 2 06:51:54.810172 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 2 06:51:54.810276 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 06:51:54.810374 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 06:51:54.810457 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xfebfffff window] Jul 2 06:51:54.810542 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jul 2 06:51:54.810623 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 06:51:54.810756 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 06:51:54.810870 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 06:51:54.810994 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jul 2 06:51:54.811086 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc0c0-0xc0cf] Jul 2 06:51:54.811183 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 2 06:51:54.811275 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 2 06:51:54.811402 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 2 06:51:54.811496 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 2 06:51:54.811606 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jul 2 06:51:54.811698 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jul 2 06:51:54.811798 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jul 2 06:51:54.811915 kernel: pci 0000:00:02.0: [1234:1111] type 00 class 0x030000 Jul 2 06:51:54.812008 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Jul 2 06:51:54.812107 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Jul 2 06:51:54.812200 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Jul 2 06:51:54.812292 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 06:51:54.812417 kernel: pci 0000:00:03.0: [1af4:1005] type 00 class 0x00ff00 Jul 2 06:51:54.812515 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc080-0xc09f] Jul 2 06:51:54.812616 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Jul 2 06:51:54.812704 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Jul 2 06:51:54.812827 
kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 2 06:51:54.812924 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 2 06:51:54.813018 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Jul 2 06:51:54.813110 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Jul 2 06:51:54.813224 kernel: pci 0000:00:05.0: [1af4:1000] type 00 class 0x020000 Jul 2 06:51:54.813323 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc0a0-0xc0bf] Jul 2 06:51:54.813437 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Jul 2 06:51:54.813528 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Jul 2 06:51:54.813622 kernel: pci 0000:00:05.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Jul 2 06:51:54.813635 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 06:51:54.813645 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 06:51:54.813654 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 06:51:54.813664 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 06:51:54.813676 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 06:51:54.813685 kernel: iommu: Default domain type: Translated Jul 2 06:51:54.813695 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 06:51:54.813704 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 06:51:54.813714 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 06:51:54.813723 kernel: PTP clock support registered Jul 2 06:51:54.813739 kernel: PCI: Using ACPI for IRQ routing Jul 2 06:51:54.813748 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 06:51:54.813757 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 06:51:54.813768 kernel: e820: reserve RAM buffer [mem 0x9cfdd000-0x9fffffff] Jul 2 06:51:54.813863 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jul 2 06:51:54.813956 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jul 2 06:51:54.814044 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 06:51:54.814056 kernel: vgaarb: loaded Jul 2 06:51:54.814065 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 2 06:51:54.814074 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 2 06:51:54.814083 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 06:51:54.814095 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 06:51:54.814104 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 06:51:54.814113 kernel: pnp: PnP ACPI init Jul 2 06:51:54.814235 kernel: pnp 00:02: [dma 2] Jul 2 06:51:54.814248 kernel: pnp: PnP ACPI: found 6 devices Jul 2 06:51:54.814258 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 06:51:54.814267 kernel: NET: Registered PF_INET protocol family Jul 2 06:51:54.814275 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 06:51:54.814287 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 2 06:51:54.814296 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 06:51:54.814305 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 06:51:54.814314 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 2 06:51:54.814323 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 2 06:51:54.814345 kernel: UDP hash table 
entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 06:51:54.814354 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 2 06:51:54.814363 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 06:51:54.814372 kernel: NET: Registered PF_XDP protocol family Jul 2 06:51:54.814463 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 06:51:54.814544 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 06:51:54.814626 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 06:51:54.814707 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xfebfffff window] Jul 2 06:51:54.814798 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jul 2 06:51:54.814894 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jul 2 06:51:54.814986 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 06:51:54.814999 kernel: PCI: CLS 0 bytes, default 64 Jul 2 06:51:54.815012 kernel: Initialise system trusted keyrings Jul 2 06:51:54.815021 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 2 06:51:54.815029 kernel: Key type asymmetric registered Jul 2 06:51:54.815038 kernel: Asymmetric key parser 'x509' registered Jul 2 06:51:54.815047 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 06:51:54.815056 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 06:51:54.815065 kernel: io scheduler mq-deadline registered Jul 2 06:51:54.815073 kernel: io scheduler kyber registered Jul 2 06:51:54.815082 kernel: io scheduler bfq registered Jul 2 06:51:54.815093 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 06:51:54.815102 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jul 2 06:51:54.815111 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Jul 2 06:51:54.815120 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jul 2 06:51:54.815129 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 06:51:54.815138 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 06:51:54.815147 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 06:51:54.815156 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 06:51:54.815165 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 06:51:54.815176 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 06:51:54.815279 kernel: rtc_cmos 00:05: RTC can wake from S4 Jul 2 06:51:54.815380 kernel: rtc_cmos 00:05: registered as rtc0 Jul 2 06:51:54.815466 kernel: rtc_cmos 00:05: setting system clock to 2024-07-02T06:51:54 UTC (1719903114) Jul 2 06:51:54.815551 kernel: rtc_cmos 00:05: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 2 06:51:54.815563 kernel: NET: Registered PF_INET6 protocol family Jul 2 06:51:54.815572 kernel: Segment Routing with IPv6 Jul 2 06:51:54.815581 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 06:51:54.815593 kernel: NET: Registered PF_PACKET protocol family Jul 2 06:51:54.815602 kernel: Key type dns_resolver registered Jul 2 06:51:54.815611 kernel: IPI shorthand broadcast: enabled Jul 2 06:51:54.815620 kernel: sched_clock: Marking stable (679181511, 106531993)->(807765539, -22052035) Jul 2 06:51:54.815629 kernel: registered taskstats version 1 Jul 2 06:51:54.815638 kernel: Loading compiled-in X.509 certificates Jul 2 06:51:54.815646 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 
06:51:54.815655 kernel: Key type .fscrypt registered Jul 2 06:51:54.815664 kernel: Key type fscrypt-provisioning registered Jul 2 06:51:54.815675 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 06:51:54.815684 kernel: ima: Allocated hash algorithm: sha1 Jul 2 06:51:54.815692 kernel: ima: No architecture policies found Jul 2 06:51:54.815701 kernel: clk: Disabling unused clocks Jul 2 06:51:54.815710 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 06:51:54.815719 kernel: Write protecting the kernel read-only data: 34816k Jul 2 06:51:54.815728 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 06:51:54.815746 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 06:51:54.815757 kernel: Run /init as init process Jul 2 06:51:54.815765 kernel: with arguments: Jul 2 06:51:54.815774 kernel: /init Jul 2 06:51:54.815783 kernel: with environment: Jul 2 06:51:54.815791 kernel: HOME=/ Jul 2 06:51:54.815800 kernel: TERM=linux Jul 2 06:51:54.815810 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 06:51:54.815835 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:51:54.815849 systemd[1]: Detected virtualization kvm. Jul 2 06:51:54.815859 systemd[1]: Detected architecture x86-64. Jul 2 06:51:54.815869 systemd[1]: Running in initrd. Jul 2 06:51:54.815879 systemd[1]: No hostname configured, using default hostname. Jul 2 06:51:54.815888 systemd[1]: Hostname set to . Jul 2 06:51:54.815899 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:51:54.815918 systemd[1]: Queued start job for default target initrd.target. Jul 2 06:51:54.815931 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:51:54.815944 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:51:54.815965 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:51:54.815991 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:51:54.816017 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:51:54.816043 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:51:54.816074 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:51:54.816089 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:51:54.816102 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 06:51:54.816112 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 06:51:54.816122 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 06:51:54.816132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:51:54.816142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:51:54.816152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:51:54.816162 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:51:54.816172 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:51:54.816182 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 06:51:54.816193 systemd[1]: Starting systemd-fsck-usr.service... 
Jul 2 06:51:54.816203 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:51:54.816213 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:51:54.816223 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 06:51:54.816235 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:51:54.816247 kernel: audit: type=1130 audit(1719903114.809:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.816257 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 06:51:54.816271 systemd-journald[195]: Journal started Jul 2 06:51:54.816323 systemd-journald[195]: Runtime Journal (/run/log/journal/7ce28d8a5893453d809cdd55203a102a) is 6.0M, max 48.4M, 42.3M free. Jul 2 06:51:54.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.817247 systemd-modules-load[196]: Inserted module 'overlay' Jul 2 06:51:54.851800 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 2 06:51:54.851829 kernel: Bridge firewalling registered Jul 2 06:51:54.851842 kernel: audit: type=1130 audit(1719903114.851:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.849707 systemd-modules-load[196]: Inserted module 'br_netfilter' Jul 2 06:51:54.855544 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:51:54.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.857936 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:51:54.865078 kernel: audit: type=1130 audit(1719903114.857:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.865094 kernel: audit: type=1130 audit(1719903114.860:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.870484 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 06:51:54.871718 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 06:51:54.874251 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:51:54.877525 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 2 06:51:54.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.881353 kernel: audit: type=1130 audit(1719903114.877:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.886915 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:51:54.888893 kernel: SCSI subsystem initialized Jul 2 06:51:54.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.890499 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:51:54.894774 kernel: audit: type=1130 audit(1719903114.889:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.897344 kernel: audit: type=1130 audit(1719903114.894:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.902348 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 06:51:54.902386 kernel: device-mapper: uevent: version 1.0.3 Jul 2 06:51:54.906512 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 06:51:54.906551 kernel: audit: type=1334 audit(1719903114.905:9): prog-id=6 op=LOAD Jul 2 06:51:54.905000 audit: BPF prog-id=6 op=LOAD Jul 2 06:51:54.903993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 06:51:54.906653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:51:54.907756 systemd-modules-load[196]: Inserted module 'dm_multipath' Jul 2 06:51:54.911680 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:51:54.916466 kernel: audit: type=1130 audit(1719903114.912:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:54.916990 dracut-cmdline[215]: dracut-dracut-053 Jul 2 06:51:54.919321 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 06:51:54.925123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:51:54.939424 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:51:54.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.949074 systemd-resolved[217]: Positive Trust Anchors: Jul 2 06:51:54.949091 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:51:54.949121 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:51:54.951548 systemd-resolved[217]: Defaulting to hostname 'linux'. Jul 2 06:51:54.952381 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:51:54.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:54.959823 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:51:55.000357 kernel: Loading iSCSI transport class v2.0-870. Jul 2 06:51:55.014358 kernel: iscsi: registered transport (tcp) Jul 2 06:51:55.034390 kernel: iscsi: registered transport (qla4xxx) Jul 2 06:51:55.034428 kernel: QLogic iSCSI HBA Driver Jul 2 06:51:55.070149 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 06:51:55.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:55.076491 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 06:51:55.137408 kernel: raid6: avx2x4 gen() 30708 MB/s Jul 2 06:51:55.154393 kernel: raid6: avx2x2 gen() 32116 MB/s Jul 2 06:51:55.171466 kernel: raid6: avx2x1 gen() 25797 MB/s Jul 2 06:51:55.171528 kernel: raid6: using algorithm avx2x2 gen() 32116 MB/s Jul 2 06:51:55.189453 kernel: raid6: .... xor() 18583 MB/s, rmw enabled Jul 2 06:51:55.189490 kernel: raid6: using avx2x2 recovery algorithm Jul 2 06:51:55.193358 kernel: xor: automatically using best checksumming function avx Jul 2 06:51:55.339390 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 06:51:55.349084 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 2 06:51:55.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:55.351000 audit: BPF prog-id=7 op=LOAD Jul 2 06:51:55.351000 audit: BPF prog-id=8 op=LOAD Jul 2 06:51:55.369655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:51:55.381954 systemd-udevd[398]: Using default interface naming scheme 'v252'. Jul 2 06:51:55.386009 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:51:55.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:55.389406 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 06:51:55.400262 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jul 2 06:51:55.428019 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:51:55.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:55.436468 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:51:55.471777 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:51:55.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:55.513372 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jul 2 06:51:55.525782 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 2 06:51:55.525906 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 06:51:55.525917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 06:51:55.525926 kernel: GPT:9289727 != 19775487 Jul 2 06:51:55.525933 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 06:51:55.525941 kernel: GPT:9289727 != 19775487 Jul 2 06:51:55.525949 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 06:51:55.525957 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:51:55.534857 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 06:51:55.534882 kernel: AES CTR mode by8 optimization enabled Jul 2 06:51:55.543344 kernel: libata version 3.00 loaded. Jul 2 06:51:55.545459 kernel: ata_piix 0000:00:01.1: version 2.13 Jul 2 06:51:55.557859 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (440) Jul 2 06:51:55.557879 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (449) Jul 2 06:51:55.557891 kernel: scsi host0: ata_piix Jul 2 06:51:55.558017 kernel: scsi host1: ata_piix Jul 2 06:51:55.558124 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc0c0 irq 14 Jul 2 06:51:55.558138 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc0c8 irq 15 Jul 2 06:51:55.551172 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Jul 2 06:51:55.593590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 2 06:51:55.596905 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 2 06:51:55.602423 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 06:51:55.608495 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 2 06:51:55.620600 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 06:51:55.628617 disk-uuid[515]: Primary Header is updated. Jul 2 06:51:55.628617 disk-uuid[515]: Secondary Entries is updated. Jul 2 06:51:55.628617 disk-uuid[515]: Secondary Header is updated. Jul 2 06:51:55.633353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:51:55.636368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:51:55.639352 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:51:55.716437 kernel: ata2: found unknown device (class 0) Jul 2 06:51:55.718360 kernel: ata2.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jul 2 06:51:55.720427 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jul 2 06:51:55.786423 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jul 2 06:51:55.810365 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 06:51:55.810378 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jul 2 06:51:56.636363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 2 06:51:56.636430 disk-uuid[516]: The operation has completed successfully. Jul 2 06:51:56.658415 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 06:51:56.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:56.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:56.658514 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 06:51:56.683497 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 06:51:56.688467 sh[546]: Success Jul 2 06:51:56.700352 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Jul 2 06:51:56.726891 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 06:51:56.738526 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 06:51:56.740539 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 06:51:56.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:56.753875 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 06:51:56.753938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:51:56.753951 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 06:51:56.754908 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 06:51:56.756348 kernel: BTRFS info (device dm-0): using free space tree Jul 2 06:51:56.760134 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 06:51:56.761187 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 06:51:56.773487 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 06:51:56.775257 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 06:51:56.783944 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:51:56.783982 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:51:56.783994 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:51:56.791409 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 06:51:56.793379 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:51:56.801623 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 06:51:56.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:56.810551 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 06:51:56.885886 ignition[648]: Ignition 2.15.0 Jul 2 06:51:56.885899 ignition[648]: Stage: fetch-offline Jul 2 06:51:56.885932 ignition[648]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:56.885939 ignition[648]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:56.886022 ignition[648]: parsed url from cmdline: "" Jul 2 06:51:56.886025 ignition[648]: no config URL provided Jul 2 06:51:56.886029 ignition[648]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 06:51:56.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:56.892000 audit: BPF prog-id=9 op=LOAD Jul 2 06:51:56.890703 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:51:56.886036 ignition[648]: no config at "/usr/lib/ignition/user.ign" Jul 2 06:51:56.886057 ignition[648]: op(1): [started] loading QEMU firmware config module Jul 2 06:51:56.886062 ignition[648]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 2 06:51:56.893418 ignition[648]: op(1): [finished] loading QEMU firmware config module Jul 2 06:51:56.907545 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:51:56.927895 systemd-networkd[735]: lo: Link UP Jul 2 06:51:56.927904 systemd-networkd[735]: lo: Gained carrier Jul 2 06:51:56.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:56.928516 systemd-networkd[735]: Enumeration completed Jul 2 06:51:56.928591 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:51:56.928879 systemd-networkd[735]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:51:56.928882 systemd-networkd[735]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:51:56.930342 systemd-networkd[735]: eth0: Link UP Jul 2 06:51:56.930344 systemd-networkd[735]: eth0: Gained carrier Jul 2 06:51:56.930349 systemd-networkd[735]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:51:56.930783 systemd[1]: Reached target network.target - Network. Jul 2 06:51:56.947882 ignition[648]: parsing config with SHA512: 7ea67723c05f39a62abf43f86677956a9689bc609e71a0017f5f66092a2137434944ecb2ef11d2f93ab62ea9f9decbe07158df31a1d78fac178cc70369d29d39 Jul 2 06:51:56.948130 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:51:56.951158 unknown[648]: fetched base config from "system" Jul 2 06:51:56.951606 unknown[648]: fetched user config from "qemu" Jul 2 06:51:56.952097 ignition[648]: fetch-offline: fetch-offline passed Jul 2 06:51:56.952155 ignition[648]: Ignition finished successfully Jul 2 06:51:56.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:56.953047 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:51:56.953686 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 2 06:51:56.954404 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 2 06:51:56.954407 systemd-networkd[735]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 06:51:57.001145 ignition[739]: Ignition 2.15.0 Jul 2 06:51:57.001879 ignition[739]: Stage: kargs Jul 2 06:51:57.002178 ignition[739]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:57.002188 ignition[739]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:57.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.004885 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 06:51:57.003222 ignition[739]: kargs: kargs passed Jul 2 06:51:57.003266 ignition[739]: Ignition finished successfully Jul 2 06:51:57.013567 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 06:51:57.023981 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:51:57.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.025078 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 06:51:57.029084 iscsid[754]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:51:57.029084 iscsid[754]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 06:51:57.029084 iscsid[754]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 06:51:57.029084 iscsid[754]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 06:51:57.029084 iscsid[754]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 06:51:57.029084 iscsid[754]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 06:51:57.029084 iscsid[754]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 06:51:57.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.030194 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 06:51:57.030954 ignition[747]: Ignition 2.15.0 Jul 2 06:51:57.046000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.043517 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 06:51:57.031427 ignition[747]: Stage: disks Jul 2 06:51:57.045401 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 06:51:57.031566 ignition[747]: no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:57.047000 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 06:51:57.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.031576 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:57.049049 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:51:57.032442 ignition[747]: disks: disks passed Jul 2 06:51:57.050757 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:51:57.032483 ignition[747]: Ignition finished successfully Jul 2 06:51:57.052780 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:51:57.054700 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:51:57.055665 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 06:51:57.058171 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:51:57.058631 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:51:57.059001 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:51:57.069579 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 06:51:57.079350 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:51:57.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.080486 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jul 2 06:51:57.090210 systemd-fsck[773]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 06:51:57.097466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 06:51:57.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.112476 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 06:51:57.227394 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 06:51:57.228177 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 06:51:57.229131 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 06:51:57.241419 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:51:57.243154 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 06:51:57.244643 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 06:51:57.253308 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (779) Jul 2 06:51:57.253352 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:51:57.253368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:51:57.253381 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:51:57.244695 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 06:51:57.244720 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:51:57.247765 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 06:51:57.254194 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 06:51:57.258509 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 06:51:57.287191 initrd-setup-root[803]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 06:51:57.290884 initrd-setup-root[810]: cut: /sysroot/etc/group: No such file or directory Jul 2 06:51:57.293843 initrd-setup-root[817]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 06:51:57.296647 initrd-setup-root[824]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 06:51:57.357692 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 06:51:57.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.365475 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 06:51:57.367250 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 06:51:57.373044 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 06:51:57.374626 kernel: BTRFS info (device vda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:51:57.387120 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 06:51:57.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:57.430011 ignition[893]: INFO : Ignition 2.15.0 Jul 2 06:51:57.430011 ignition[893]: INFO : Stage: mount Jul 2 06:51:57.431795 ignition[893]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:57.431795 ignition[893]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:57.431795 ignition[893]: INFO : mount: mount passed Jul 2 06:51:57.431795 ignition[893]: INFO : Ignition finished successfully Jul 2 06:51:57.436154 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 06:51:57.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:57.445470 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 06:51:58.241711 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 06:51:58.249879 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (902) Jul 2 06:51:58.249965 kernel: BTRFS info (device vda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 06:51:58.249979 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 06:51:58.250305 kernel: BTRFS info (device vda6): using free space tree Jul 2 06:51:58.254812 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 06:51:58.283956 ignition[920]: INFO : Ignition 2.15.0 Jul 2 06:51:58.283956 ignition[920]: INFO : Stage: files Jul 2 06:51:58.285820 ignition[920]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:58.285820 ignition[920]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:58.285820 ignition[920]: DEBUG : files: compiled without relabeling support, skipping Jul 2 06:51:58.289530 ignition[920]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 06:51:58.289530 ignition[920]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 06:51:58.289530 ignition[920]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 06:51:58.294053 ignition[920]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 06:51:58.294053 ignition[920]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 06:51:58.294053 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:51:58.294053 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 06:51:58.291575 unknown[920]: wrote ssh authorized keys file for user: core Jul 2 06:51:58.324355 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 06:51:58.407759 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 06:51:58.407759 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:51:58.412624 
ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 06:51:58.412624 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 06:51:58.875967 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 06:51:58.987965 systemd-networkd[735]: eth0: Gained IPv6LL Jul 2 06:51:59.505196 ignition[920]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 06:51:59.505196 ignition[920]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 2 06:51:59.509142 ignition[920]: INFO : files: op(f): op(10): [started] 
removing enablement symlink(s) for "coreos-metadata.service" Jul 2 06:51:59.528560 ignition[920]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 2 06:51:59.530250 ignition[920]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 2 06:51:59.530250 ignition[920]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 2 06:51:59.530250 ignition[920]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 06:51:59.530250 ignition[920]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:51:59.530250 ignition[920]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 06:51:59.530250 ignition[920]: INFO : files: files passed Jul 2 06:51:59.530250 ignition[920]: INFO : Ignition finished successfully Jul 2 06:51:59.552812 kernel: kauditd_printk_skb: 27 callbacks suppressed Jul 2 06:51:59.552844 kernel: audit: type=1130 audit(1719903119.532:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.552857 kernel: audit: type=1130 audit(1719903119.546:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.552866 kernel: audit: type=1131 audit(1719903119.546:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.530260 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 06:51:59.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.539755 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 06:51:59.560382 kernel: audit: type=1130 audit(1719903119.555:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.542718 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 06:51:59.544746 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 06:51:59.562915 initrd-setup-root-after-ignition[944]: grep: /sysroot/oem/oem-release: No such file or directory Jul 2 06:51:59.544852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
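Everything the files stage just did (the core user's SSH key, the Helm tarball, the kubernetes sysext image and its /etc/extensions link, enabling prepare-helm.service and disabling coreos-metadata.service) is driven by the Ignition config that only appears in this log as a SHA512. A hypothetical Butane-style fragment, trimmed to a few representative entries, that would produce roughly these results; the variant/version strings and the SSH key are placeholders, not values taken from this system:

    # Illustrative Butane fragment only; the real provisioning config is not in this log.
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - "ssh-ed25519 AAAA...placeholder-key"
    storage:
      files:
        - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
        - path: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
          contents:
            source: https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw
      links:
        - path: /etc/extensions/kubernetes.raw
          target: /opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
        - name: coreos-metadata.service
          enabled: false

The enabled/disabled entries correspond directly to the "setting preset to enabled/disabled" lines above, and the storage entries to the GET and "writing file"/"writing link" operations.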
Jul 2 06:51:59.565598 initrd-setup-root-after-ignition[946]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:51:59.565598 initrd-setup-root-after-ignition[946]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:51:59.554009 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:51:59.570096 initrd-setup-root-after-ignition[950]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 06:51:59.555433 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 06:51:59.574590 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 06:51:59.589155 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 06:51:59.589256 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 06:51:59.598322 kernel: audit: type=1130 audit(1719903119.591:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.598352 kernel: audit: type=1131 audit(1719903119.591:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.591262 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 06:51:59.598337 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 06:51:59.599522 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 06:51:59.609483 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 06:51:59.621340 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 06:51:59.626235 kernel: audit: type=1130 audit(1719903119.621:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.626297 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 06:51:59.637983 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:51:59.638662 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:51:59.640745 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 06:51:59.642695 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 06:51:59.648407 kernel: audit: type=1131 audit(1719903119.643:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:59.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.642795 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 06:51:59.644299 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 06:51:59.648964 systemd[1]: Stopped target basic.target - Basic System. Jul 2 06:51:59.649277 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 06:51:59.652651 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 06:51:59.652950 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 06:51:59.656755 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 06:51:59.658808 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 06:51:59.661638 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 06:51:59.663594 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 06:51:59.663910 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:51:59.667647 systemd[1]: Stopped target swap.target - Swaps. Jul 2 06:51:59.669736 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 06:51:59.675115 kernel: audit: type=1131 audit(1719903119.670:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.669850 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 06:51:59.671139 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:51:59.681866 kernel: audit: type=1131 audit(1719903119.677:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.675723 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 06:51:59.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.675809 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 06:51:59.677897 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 06:51:59.677982 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 06:51:59.682386 systemd[1]: Stopped target paths.target - Path Units. Jul 2 06:51:59.684804 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 06:51:59.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:51:59.684906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 06:51:59.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.686956 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 06:51:59.688937 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 06:51:59.689492 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 06:51:59.689576 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 06:51:59.692193 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 06:51:59.692291 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 06:51:59.709680 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 06:51:59.711678 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jul 2 06:51:59.712458 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 06:51:59.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.715342 iscsid[754]: iscsid shutting down. Jul 2 06:51:59.712569 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:51:59.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.715298 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 06:51:59.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.716200 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 06:51:59.716363 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:51:59.718156 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 06:51:59.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.718281 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 06:51:59.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.728806 ignition[964]: INFO : Ignition 2.15.0 Jul 2 06:51:59.728806 ignition[964]: INFO : Stage: umount Jul 2 06:51:59.722389 systemd[1]: iscsid.service: Deactivated successfully. 
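The SERVICE_START/SERVICE_STOP records threaded through this teardown are kernel audit events that journald captures alongside the unit messages; the earlier "kauditd_printk_skb: callbacks suppressed" note is printk rate limiting on the console, the records themselves still reach the audit stream. After boot they can be pulled out on their own with journalctl's field matching:

    # All audit records from the current boot, message text only
    journalctl -b _TRANSPORT=audit -o cat
    # Narrow to the systemd service lifecycle events seen in this log
    journalctl -b _TRANSPORT=audit -o cat | grep -E 'SERVICE_(START|STOP)'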
Jul 2 06:51:59.732556 ignition[964]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 06:51:59.732556 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 2 06:51:59.732556 ignition[964]: INFO : umount: umount passed Jul 2 06:51:59.732556 ignition[964]: INFO : Ignition finished successfully Jul 2 06:51:59.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.722473 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jul 2 06:51:59.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.724679 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 06:51:59.724762 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 06:51:59.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.745000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.727424 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 06:51:59.727452 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 06:51:59.729168 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 06:51:59.732286 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 06:51:59.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.732698 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 06:51:59.732792 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 06:51:59.755000 audit: BPF prog-id=6 op=UNLOAD Jul 2 06:51:59.734796 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 06:51:59.734868 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 06:51:59.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.736763 systemd[1]: Stopped target network.target - Network. Jul 2 06:51:59.738252 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 06:51:59.738286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 06:51:59.740133 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 06:51:59.740166 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 06:51:59.741888 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jul 2 06:51:59.741923 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 06:51:59.743801 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 06:51:59.743833 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 06:51:59.746055 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 06:51:59.747957 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 06:51:59.750154 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 06:51:59.772000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.750237 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 06:51:59.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.752423 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 06:51:59.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.752463 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:51:59.754861 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 06:51:59.755382 systemd-networkd[735]: eth0: DHCPv6 lease lost Jul 2 06:51:59.784000 audit: BPF prog-id=9 op=UNLOAD Jul 2 06:51:59.756400 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 06:51:59.756501 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 06:51:59.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.758390 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 06:51:59.758419 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:51:59.769461 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 06:51:59.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.770558 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 06:51:59.770613 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 06:51:59.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.772779 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
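The DHCPv6 "lease lost" notice here is only the initrd's networkd instance being torn down ahead of switch-root; the permanent system re-acquires the same 10.0.0.35/16 lease from 10.0.0.1 further down. Once that instance is running, the lease can be inspected from a shell, assuming networkctl is on the image as it normally is wherever systemd-networkd runs:

    networkctl list           # per-link state summary (eth0 should show configured)
    networkctl status eth0    # address, gateway and DHCP lease details for the NIC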
Jul 2 06:51:59.772812 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:51:59.775000 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 06:51:59.775034 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 06:51:59.775397 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:51:59.776254 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 06:51:59.780110 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 06:51:59.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.780203 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 06:51:59.785568 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 06:51:59.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.811000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.785684 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:51:59.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:51:59.787725 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 06:51:59.787761 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 06:51:59.789629 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 06:51:59.789655 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:51:59.791592 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 06:51:59.791635 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 06:51:59.793705 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 06:51:59.793739 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 06:51:59.795603 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 06:51:59.795636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 06:51:59.803504 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 06:51:59.805115 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 06:51:59.805162 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 06:51:59.807672 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 06:51:59.807749 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 06:51:59.809397 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 06:51:59.809461 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Jul 2 06:51:59.811366 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 06:51:59.813142 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 06:51:59.813182 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 06:51:59.823528 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 06:51:59.830395 systemd[1]: Switching root. Jul 2 06:51:59.849051 systemd-journald[195]: Journal stopped Jul 2 06:52:00.737077 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). Jul 2 06:52:00.737125 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jul 2 06:52:00.737138 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 06:52:00.737150 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 06:52:00.737159 kernel: SELinux: policy capability open_perms=1 Jul 2 06:52:00.737167 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 06:52:00.737180 kernel: SELinux: policy capability always_check_network=0 Jul 2 06:52:00.737192 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 06:52:00.737203 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 06:52:00.737214 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 06:52:00.737225 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 06:52:00.737235 systemd[1]: Successfully loaded SELinux policy in 39.300ms. Jul 2 06:52:00.737249 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.611ms. Jul 2 06:52:00.737260 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 06:52:00.737270 systemd[1]: Detected virtualization kvm. Jul 2 06:52:00.737280 systemd[1]: Detected architecture x86-64. Jul 2 06:52:00.737289 systemd[1]: Detected first boot. Jul 2 06:52:00.737299 systemd[1]: Initializing machine ID from VM UUID. Jul 2 06:52:00.737311 systemd[1]: Populated /etc with preset unit settings. Jul 2 06:52:00.737321 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 06:52:00.737347 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 06:52:00.737361 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 06:52:00.737371 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 06:52:00.737381 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 06:52:00.737390 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 06:52:00.737401 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 06:52:00.737411 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 06:52:00.737421 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 06:52:00.737430 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 06:52:00.737443 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 06:52:00.737456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
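Each fact systemd reports right after the pivot (KVM virtualization, first boot, a machine ID derived from the VM UUID, the loaded SELinux policy) can be re-checked later from the running system. A few one-liners, assuming nothing beyond systemd's own tools and the kernel's selinuxfs:

    systemd-detect-virt            # prints "kvm" for this guest
    cat /etc/machine-id            # the ID initialized from the VM UUID on first boot
    cat /sys/fs/selinux/enforce    # 0/1 = permissive/enforcing for the loaded policy
    systemd-analyze                # time spent in kernel, initrd and userspace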
Jul 2 06:52:00.737466 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 06:52:00.737477 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 06:52:00.737487 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 06:52:00.737498 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 06:52:00.737507 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 06:52:00.737517 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 06:52:00.737527 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 06:52:00.737536 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 06:52:00.737547 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 06:52:00.737556 systemd[1]: Reached target slices.target - Slice Units. Jul 2 06:52:00.737567 systemd[1]: Reached target swap.target - Swaps. Jul 2 06:52:00.737584 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 06:52:00.737595 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 06:52:00.737604 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 06:52:00.737614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 06:52:00.737624 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 06:52:00.737633 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 06:52:00.737642 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 06:52:00.737652 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 06:52:00.737663 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 06:52:00.737673 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 06:52:00.737683 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:00.737693 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 06:52:00.737703 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 06:52:00.737712 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 06:52:00.737722 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 06:52:00.737731 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:52:00.737741 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 06:52:00.737752 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 06:52:00.737763 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:52:00.737772 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:52:00.737782 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:52:00.737792 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 06:52:00.737801 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
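The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop entries here are instances of one systemd template unit that loads whichever module is named after the "@"; the "fuse: init" and "loop: module loaded" kernel lines a little further on are the visible result. On a running system the template and the loaded modules can be checked like this (assuming loop, fuse and dm_mod really are modules rather than built-ins, as the "module loaded" messages suggest):

    systemctl cat modprobe@loop.service      # shows the shared modprobe@.service template
    lsmod | grep -E '^(loop|fuse|dm_mod) '   # the modules those instances pulled in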
Jul 2 06:52:00.737811 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 06:52:00.737821 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 06:52:00.737832 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 06:52:00.737841 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 06:52:00.737850 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 06:52:00.737860 systemd[1]: Stopped systemd-journald.service - Journal Service. Jul 2 06:52:00.737869 kernel: fuse: init (API version 7.37) Jul 2 06:52:00.737878 kernel: loop: module loaded Jul 2 06:52:00.737887 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 06:52:00.737897 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 06:52:00.737908 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 06:52:00.737919 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 06:52:00.737931 systemd-journald[1071]: Journal started Jul 2 06:52:00.737966 systemd-journald[1071]: Runtime Journal (/run/log/journal/7ce28d8a5893453d809cdd55203a102a) is 6.0M, max 48.4M, 42.3M free. Jul 2 06:51:59.910000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 06:52:00.185000 audit: BPF prog-id=10 op=LOAD Jul 2 06:52:00.185000 audit: BPF prog-id=10 op=UNLOAD Jul 2 06:52:00.186000 audit: BPF prog-id=11 op=LOAD Jul 2 06:52:00.186000 audit: BPF prog-id=11 op=UNLOAD Jul 2 06:52:00.574000 audit: BPF prog-id=12 op=LOAD Jul 2 06:52:00.574000 audit: BPF prog-id=3 op=UNLOAD Jul 2 06:52:00.574000 audit: BPF prog-id=13 op=LOAD Jul 2 06:52:00.574000 audit: BPF prog-id=14 op=LOAD Jul 2 06:52:00.574000 audit: BPF prog-id=4 op=UNLOAD Jul 2 06:52:00.574000 audit: BPF prog-id=5 op=UNLOAD Jul 2 06:52:00.575000 audit: BPF prog-id=15 op=LOAD Jul 2 06:52:00.575000 audit: BPF prog-id=12 op=UNLOAD Jul 2 06:52:00.575000 audit: BPF prog-id=16 op=LOAD Jul 2 06:52:00.575000 audit: BPF prog-id=17 op=LOAD Jul 2 06:52:00.575000 audit: BPF prog-id=13 op=UNLOAD Jul 2 06:52:00.575000 audit: BPF prog-id=14 op=UNLOAD Jul 2 06:52:00.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.587000 audit: BPF prog-id=15 op=UNLOAD Jul 2 06:52:00.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:00.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.717000 audit: BPF prog-id=18 op=LOAD Jul 2 06:52:00.717000 audit: BPF prog-id=19 op=LOAD Jul 2 06:52:00.717000 audit: BPF prog-id=20 op=LOAD Jul 2 06:52:00.717000 audit: BPF prog-id=16 op=UNLOAD Jul 2 06:52:00.717000 audit: BPF prog-id=17 op=UNLOAD Jul 2 06:52:00.735000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 06:52:00.735000 audit[1071]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffc377e1c20 a2=4000 a3=7ffc377e1cbc items=0 ppid=1 pid=1071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:00.735000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 06:52:00.565356 systemd[1]: Queued start job for default target multi-user.target. Jul 2 06:52:00.565368 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 2 06:52:00.576625 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 06:52:00.742445 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 06:52:00.744413 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 06:52:00.744457 systemd[1]: Stopped verity-setup.service. Jul 2 06:52:00.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.747369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:00.749360 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 06:52:00.749396 kernel: ACPI: bus type drm_connector registered Jul 2 06:52:00.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.751498 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 06:52:00.752680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 06:52:00.753888 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 06:52:00.754947 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 06:52:00.756116 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 06:52:00.757294 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 06:52:00.758622 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 06:52:00.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:00.759954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 06:52:00.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.761274 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 06:52:00.761428 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 06:52:00.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.762757 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:52:00.762878 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:52:00.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.764211 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 06:52:00.764468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:52:00.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.765761 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:52:00.765881 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:52:00.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.767189 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 06:52:00.767305 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 06:52:00.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:00.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.768729 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:52:00.768845 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:52:00.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.770253 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 06:52:00.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.771601 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 06:52:00.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.772981 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 06:52:00.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.774835 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 06:52:00.785616 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 06:52:00.788798 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 06:52:00.790107 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 06:52:00.792766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 06:52:00.795809 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 06:52:00.797129 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:52:00.798430 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 06:52:00.799900 systemd-journald[1071]: Time spent on flushing to /var/log/journal/7ce28d8a5893453d809cdd55203a102a is 23.343ms for 1070 entries. Jul 2 06:52:00.799900 systemd-journald[1071]: System Journal (/var/log/journal/7ce28d8a5893453d809cdd55203a102a) is 8.0M, max 195.6M, 187.6M free. Jul 2 06:52:00.835610 systemd-journald[1071]: Received client request to flush runtime journal. Jul 2 06:52:00.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:00.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.799806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:52:00.801598 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 06:52:00.804510 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 06:52:00.810107 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 06:52:00.811757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 06:52:00.837143 udevadm[1096]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 06:52:00.813373 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 06:52:00.820686 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 06:52:00.822358 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 06:52:00.824287 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 06:52:00.825897 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 06:52:00.836844 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 06:52:00.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:00.838794 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 06:52:00.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.310366 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 06:52:01.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.311000 audit: BPF prog-id=21 op=LOAD Jul 2 06:52:01.311000 audit: BPF prog-id=22 op=LOAD Jul 2 06:52:01.311000 audit: BPF prog-id=7 op=UNLOAD Jul 2 06:52:01.311000 audit: BPF prog-id=8 op=UNLOAD Jul 2 06:52:01.319531 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 06:52:01.337325 systemd-udevd[1098]: Using default interface naming scheme 'v252'. Jul 2 06:52:01.350603 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 06:52:01.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:01.353000 audit: BPF prog-id=23 op=LOAD Jul 2 06:52:01.359480 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 06:52:01.362000 audit: BPF prog-id=24 op=LOAD Jul 2 06:52:01.362000 audit: BPF prog-id=25 op=LOAD Jul 2 06:52:01.362000 audit: BPF prog-id=26 op=LOAD Jul 2 06:52:01.363658 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 06:52:01.371853 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 06:52:01.387370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1102) Jul 2 06:52:01.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.398347 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 06:52:01.410358 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1109) Jul 2 06:52:01.424186 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 2 06:52:01.427428 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 06:52:01.431419 kernel: ACPI: button: Power Button [PWRF] Jul 2 06:52:01.437923 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 06:52:01.450496 systemd-networkd[1104]: lo: Link UP Jul 2 06:52:01.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.450804 systemd-networkd[1104]: lo: Gained carrier Jul 2 06:52:01.451187 systemd-networkd[1104]: Enumeration completed Jul 2 06:52:01.451275 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 06:52:01.454191 systemd-networkd[1104]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:52:01.454246 systemd-networkd[1104]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 06:52:01.455671 systemd-networkd[1104]: eth0: Link UP Jul 2 06:52:01.455741 systemd-networkd[1104]: eth0: Gained carrier Jul 2 06:52:01.455792 systemd-networkd[1104]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 06:52:01.459592 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jul 2 06:52:01.464362 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jul 2 06:52:01.469232 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 06:52:01.473514 systemd-networkd[1104]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 2 06:52:01.556845 kernel: SVM: TSC scaling supported Jul 2 06:52:01.556976 kernel: kvm: Nested Virtualization enabled Jul 2 06:52:01.556993 kernel: SVM: kvm: Nested Paging enabled Jul 2 06:52:01.557819 kernel: SVM: Virtual VMLOAD VMSAVE supported Jul 2 06:52:01.557844 kernel: SVM: Virtual GIF supported Jul 2 06:52:01.558726 kernel: SVM: LBR virtualization supported Jul 2 06:52:01.574366 kernel: EDAC MC: Ver: 3.0.0 Jul 2 06:52:01.610820 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 06:52:01.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.620607 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 06:52:01.627778 lvm[1135]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:52:01.652854 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 06:52:01.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.654300 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 06:52:01.665560 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 06:52:01.670027 lvm[1136]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 06:52:01.696204 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 06:52:01.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.697436 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 06:52:01.698537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 06:52:01.698568 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 06:52:01.699588 systemd[1]: Reached target machines.target - Containers. Jul 2 06:52:01.712555 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 06:52:01.713919 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:52:01.713979 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:01.715370 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 06:52:01.717912 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
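The DHCPv4 lease recorded above gives eth0 the address 10.0.0.35/16 with gateway 10.0.0.1, acquired from 10.0.0.1. As a purely illustrative sanity check (not part of the boot flow), the values from the log can be verified with Python's standard ipaddress module:

```python
# Check that the gateway reported above is on-link for the leased prefix.
import ipaddress

iface = ipaddress.ip_interface("10.0.0.35/16")  # address/prefix from the lease above
gateway = ipaddress.ip_address("10.0.0.1")      # gateway from the lease above

print(iface.network)             # 10.0.0.0/16
print(gateway in iface.network)  # True -> the gateway is directly on the local subnet
```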
Jul 2 06:52:01.720359 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 06:52:01.722966 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 06:52:01.724444 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1138 (bootctl) Jul 2 06:52:01.725515 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 06:52:01.731392 kernel: loop0: detected capacity change from 0 to 139360 Jul 2 06:52:01.739961 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 06:52:01.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.773371 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 06:52:01.883368 systemd-fsck[1146]: fsck.fat 4.2 (2021-01-31) Jul 2 06:52:01.883368 systemd-fsck[1146]: /dev/vda1: 808 files, 120378/258078 clusters Jul 2 06:52:01.885223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 06:52:01.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.891355 kernel: loop1: detected capacity change from 0 to 80600 Jul 2 06:52:01.891651 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 06:52:01.899751 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 06:52:01.913747 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jul 2 06:52:01.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.920357 kernel: loop2: detected capacity change from 0 to 211296 Jul 2 06:52:01.932611 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 06:52:01.933187 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 06:52:01.933000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:01.954356 kernel: loop3: detected capacity change from 0 to 139360 Jul 2 06:52:01.967404 kernel: loop4: detected capacity change from 0 to 80600 Jul 2 06:52:01.976348 kernel: loop5: detected capacity change from 0 to 211296 Jul 2 06:52:01.982211 (sd-sysext)[1152]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 2 06:52:01.982724 (sd-sysext)[1152]: Merged extensions into '/usr'. Jul 2 06:52:01.984422 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 06:52:01.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:01.991613 systemd[1]: Starting ensure-sysext.service... Jul 2 06:52:01.993949 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 06:52:02.011895 systemd[1]: Reloading. Jul 2 06:52:02.014416 systemd-tmpfiles[1154]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 06:52:02.016022 systemd-tmpfiles[1154]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 06:52:02.016693 systemd-tmpfiles[1154]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 06:52:02.017926 systemd-tmpfiles[1154]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 06:52:02.098276 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 06:52:02.158800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 06:52:02.225000 audit: BPF prog-id=27 op=LOAD Jul 2 06:52:02.225000 audit: BPF prog-id=24 op=UNLOAD Jul 2 06:52:02.225000 audit: BPF prog-id=28 op=LOAD Jul 2 06:52:02.225000 audit: BPF prog-id=29 op=LOAD Jul 2 06:52:02.225000 audit: BPF prog-id=25 op=UNLOAD Jul 2 06:52:02.225000 audit: BPF prog-id=26 op=UNLOAD Jul 2 06:52:02.226000 audit: BPF prog-id=30 op=LOAD Jul 2 06:52:02.226000 audit: BPF prog-id=31 op=LOAD Jul 2 06:52:02.226000 audit: BPF prog-id=21 op=UNLOAD Jul 2 06:52:02.226000 audit: BPF prog-id=22 op=UNLOAD Jul 2 06:52:02.227000 audit: BPF prog-id=32 op=LOAD Jul 2 06:52:02.227000 audit: BPF prog-id=18 op=UNLOAD Jul 2 06:52:02.227000 audit: BPF prog-id=33 op=LOAD Jul 2 06:52:02.227000 audit: BPF prog-id=34 op=LOAD Jul 2 06:52:02.227000 audit: BPF prog-id=19 op=UNLOAD Jul 2 06:52:02.227000 audit: BPF prog-id=20 op=UNLOAD Jul 2 06:52:02.228000 audit: BPF prog-id=35 op=LOAD Jul 2 06:52:02.228000 audit: BPF prog-id=23 op=UNLOAD Jul 2 06:52:02.231879 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 06:52:02.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.234843 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 06:52:02.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.240389 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 06:52:02.243693 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 06:52:02.246490 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 06:52:02.248000 audit: BPF prog-id=36 op=LOAD Jul 2 06:52:02.249959 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 06:52:02.251000 audit: BPF prog-id=37 op=LOAD Jul 2 06:52:02.253988 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 06:52:02.258539 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jul 2 06:52:02.266000 audit[1222]: SYSTEM_BOOT pid=1222 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.270973 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:02.271303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:52:02.274129 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:52:02.278227 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:52:02.280744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:52:02.281944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:52:02.282098 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:02.282240 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:02.283571 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 06:52:02.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.285510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:52:02.285798 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:52:02.287000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.288713 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:52:02.289188 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:52:02.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.290908 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:52:02.291159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:52:02.292000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:02.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:02.294658 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:52:02.294811 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:52:02.301000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 06:52:02.301000 audit[1234]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc05377d60 a2=420 a3=0 items=0 ppid=1211 pid=1234 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:02.301000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 06:52:02.301898 augenrules[1234]: No rules Jul 2 06:52:02.305680 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 06:52:02.307513 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:52:02.309110 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 06:52:02.310700 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 06:52:02.314149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:02.314322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:52:02.315723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:52:02.318223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:52:02.321538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:52:02.322778 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:52:02.322877 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:02.322961 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:52:02.323021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:02.323875 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 06:52:02.325371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:52:02.325472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:52:02.326938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:52:02.327045 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
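The audit PROCTITLE fields above carry the audited process's command line as hex-encoded bytes with NUL separators between arguments. A small Python sketch, using the exact value from the record above, decodes it back into a readable command:

```python
# Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# Value copied verbatim from the PROCTITLE record above.
print(decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
))
# -> /sbin/auditctl -R /etc/audit/audit.rules
```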
Jul 2 06:52:02.327321 systemd-resolved[1215]: Positive Trust Anchors: Jul 2 06:52:02.327378 systemd-resolved[1215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 06:52:02.327412 systemd-resolved[1215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 06:52:02.328491 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:52:02.328604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:52:02.330134 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:52:02.330227 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:52:02.331279 systemd-resolved[1215]: Defaulting to hostname 'linux'. Jul 2 06:52:02.332289 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:02.332601 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 06:52:02.340778 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 06:52:03.693453 systemd-resolved[1215]: Clock change detected. Flushing caches. Jul 2 06:52:03.693477 systemd-timesyncd[1221]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 2 06:52:03.693523 systemd-timesyncd[1221]: Initial clock synchronization to Tue 2024-07-02 06:52:03.693390 UTC. Jul 2 06:52:03.694139 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 06:52:03.696573 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 06:52:03.698909 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 06:52:03.700129 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 06:52:03.700297 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:03.700464 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 06:52:03.700574 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 06:52:03.701437 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 06:52:03.702902 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 06:52:03.704749 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 06:52:03.704942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 06:52:03.706678 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 2 06:52:03.706819 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 06:52:03.708300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 06:52:03.708424 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 06:52:03.709954 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 06:52:03.710094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 06:52:03.712073 systemd[1]: Reached target network.target - Network. Jul 2 06:52:03.713074 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 06:52:03.714274 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 06:52:03.715329 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 06:52:03.715356 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 06:52:03.716486 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 06:52:03.717631 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 06:52:03.718896 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 06:52:03.720072 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 06:52:03.721190 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 06:52:03.722308 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 06:52:03.722337 systemd[1]: Reached target paths.target - Path Units. Jul 2 06:52:03.723250 systemd[1]: Reached target timers.target - Timer Units. Jul 2 06:52:03.724838 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 06:52:03.727565 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 06:52:03.738095 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 06:52:03.739290 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:03.739351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 06:52:03.740021 systemd[1]: Finished ensure-sysext.service. Jul 2 06:52:03.740988 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 06:52:03.742944 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 06:52:03.743910 systemd[1]: Reached target basic.target - Basic System. Jul 2 06:52:03.744871 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:52:03.744893 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 06:52:03.746025 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 06:52:03.748403 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 06:52:03.751263 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 06:52:03.754115 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jul 2 06:52:03.755229 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 06:52:03.755642 jq[1252]: false Jul 2 06:52:03.757035 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 06:52:03.759858 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 06:52:03.762524 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 06:52:03.765438 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 06:52:03.769366 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 06:52:03.771088 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 06:52:03.771172 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 06:52:03.771606 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 06:52:03.772718 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 06:52:03.774771 extend-filesystems[1253]: Found loop3 Jul 2 06:52:03.775717 extend-filesystems[1253]: Found loop4 Jul 2 06:52:03.775717 extend-filesystems[1253]: Found loop5 Jul 2 06:52:03.775717 extend-filesystems[1253]: Found sr0 Jul 2 06:52:03.775717 extend-filesystems[1253]: Found vda Jul 2 06:52:03.775717 extend-filesystems[1253]: Found vda1 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda2 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda3 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found usr Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda4 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda6 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda7 Jul 2 06:52:03.781525 extend-filesystems[1253]: Found vda9 Jul 2 06:52:03.781525 extend-filesystems[1253]: Checking size of /dev/vda9 Jul 2 06:52:03.775724 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 06:52:03.790225 dbus-daemon[1251]: [system] SELinux support is enabled Jul 2 06:52:03.780446 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 06:52:03.818427 update_engine[1267]: I0702 06:52:03.790066 1267 main.cc:92] Flatcar Update Engine starting Jul 2 06:52:03.818427 update_engine[1267]: I0702 06:52:03.794266 1267 update_check_scheduler.cc:74] Next update check in 8m17s Jul 2 06:52:03.818727 extend-filesystems[1253]: Resized partition /dev/vda9 Jul 2 06:52:03.780671 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 06:52:03.820381 jq[1268]: true Jul 2 06:52:03.781107 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 06:52:03.781322 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 06:52:03.828895 tar[1271]: linux-amd64/helm Jul 2 06:52:03.783360 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 06:52:03.783578 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 06:52:03.829257 jq[1276]: true Jul 2 06:52:03.790380 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 2 06:52:03.806358 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 06:52:03.806394 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 06:52:03.807614 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 06:52:03.807632 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 06:52:03.808919 systemd[1]: Started update-engine.service - Update Engine. Jul 2 06:52:03.812239 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 06:52:03.835754 extend-filesystems[1279]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 06:52:03.850787 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1118) Jul 2 06:52:03.882036 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 2 06:52:03.889272 systemd-logind[1264]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 06:52:03.889307 systemd-logind[1264]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 06:52:03.889573 systemd-logind[1264]: New seat seat0. Jul 2 06:52:03.892055 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 06:52:03.917083 locksmithd[1280]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 06:52:04.093810 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 2 06:52:04.220849 extend-filesystems[1279]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 2 06:52:04.220849 extend-filesystems[1279]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 06:52:04.220849 extend-filesystems[1279]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 2 06:52:04.236468 extend-filesystems[1253]: Resized filesystem in /dev/vda9 Jul 2 06:52:04.237918 bash[1296]: Updated "/home/core/.ssh/authorized_keys" Jul 2 06:52:04.222405 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 06:52:04.222570 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 06:52:04.226422 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 06:52:04.238690 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 2 06:52:04.286758 containerd[1272]: time="2024-07-02T06:52:04.286673389Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 06:52:04.311559 containerd[1272]: time="2024-07-02T06:52:04.311503674Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 06:52:04.311559 containerd[1272]: time="2024-07-02T06:52:04.311556784Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.312899 containerd[1272]: time="2024-07-02T06:52:04.312872030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:52:04.312942 containerd[1272]: time="2024-07-02T06:52:04.312899652Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313140 containerd[1272]: time="2024-07-02T06:52:04.313122310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313184 containerd[1272]: time="2024-07-02T06:52:04.313139843Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 06:52:04.313232 containerd[1272]: time="2024-07-02T06:52:04.313219172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313276 containerd[1272]: time="2024-07-02T06:52:04.313263555Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313298 containerd[1272]: time="2024-07-02T06:52:04.313277240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313343 containerd[1272]: time="2024-07-02T06:52:04.313330110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313542 containerd[1272]: time="2024-07-02T06:52:04.313528001Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313569 containerd[1272]: time="2024-07-02T06:52:04.313547838Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 06:52:04.313569 containerd[1272]: time="2024-07-02T06:52:04.313557205Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313670 containerd[1272]: time="2024-07-02T06:52:04.313655690Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 06:52:04.313694 containerd[1272]: time="2024-07-02T06:52:04.313670288Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 06:52:04.313725 containerd[1272]: time="2024-07-02T06:52:04.313713428Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 06:52:04.313744 containerd[1272]: time="2024-07-02T06:52:04.313726433Z" level=info msg="metadata content store policy set" policy=shared Jul 2 06:52:04.320755 containerd[1272]: time="2024-07-02T06:52:04.320714418Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 06:52:04.320755 containerd[1272]: time="2024-07-02T06:52:04.320741579Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 06:52:04.320850 containerd[1272]: time="2024-07-02T06:52:04.320761867Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 06:52:04.320850 containerd[1272]: time="2024-07-02T06:52:04.320796582Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 06:52:04.320850 containerd[1272]: time="2024-07-02T06:52:04.320813283Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 06:52:04.320850 containerd[1272]: time="2024-07-02T06:52:04.320823322Z" level=info msg="NRI interface is disabled by configuration." Jul 2 06:52:04.320850 containerd[1272]: time="2024-07-02T06:52:04.320833150Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320918951Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320933238Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320944199Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320956261Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320978683Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.320997 containerd[1272]: time="2024-07-02T06:52:04.320992579Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321003510Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321014551Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321026563Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321038275Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321050047Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 06:52:04.321139 containerd[1272]: time="2024-07-02T06:52:04.321059866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 06:52:04.321282 containerd[1272]: time="2024-07-02T06:52:04.321139064Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 06:52:04.321360 containerd[1272]: time="2024-07-02T06:52:04.321340111Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jul 2 06:52:04.321410 containerd[1272]: time="2024-07-02T06:52:04.321368024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321410 containerd[1272]: time="2024-07-02T06:52:04.321380737Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 06:52:04.321410 containerd[1272]: time="2024-07-02T06:52:04.321399392Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 06:52:04.321493 containerd[1272]: time="2024-07-02T06:52:04.321441161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321493 containerd[1272]: time="2024-07-02T06:52:04.321452021Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321493 containerd[1272]: time="2024-07-02T06:52:04.321462431Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321493 containerd[1272]: time="2024-07-02T06:52:04.321472129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321493 containerd[1272]: time="2024-07-02T06:52:04.321483350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321620 containerd[1272]: time="2024-07-02T06:52:04.321494190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321620 containerd[1272]: time="2024-07-02T06:52:04.321505361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321620 containerd[1272]: time="2024-07-02T06:52:04.321514879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321620 containerd[1272]: time="2024-07-02T06:52:04.321525719Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321618203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321631287Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321641256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321651906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321666113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321677083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321687132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 06:52:04.321719 containerd[1272]: time="2024-07-02T06:52:04.321695999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 06:52:04.322010 containerd[1272]: time="2024-07-02T06:52:04.321928705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 06:52:04.322010 containerd[1272]: time="2024-07-02T06:52:04.321999778Z" level=info msg="Connect containerd service" Jul 2 06:52:04.322212 containerd[1272]: time="2024-07-02T06:52:04.322025126Z" level=info msg="using legacy CRI server" Jul 2 06:52:04.322212 containerd[1272]: time="2024-07-02T06:52:04.322030496Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 06:52:04.322487 containerd[1272]: time="2024-07-02T06:52:04.322469639Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 06:52:04.323422 containerd[1272]: time="2024-07-02T06:52:04.323390106Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 06:52:04.323976 containerd[1272]: time="2024-07-02T06:52:04.323926712Z" 
level=info msg="Start subscribing containerd event" Jul 2 06:52:04.324045 containerd[1272]: time="2024-07-02T06:52:04.323991503Z" level=info msg="Start recovering state" Jul 2 06:52:04.324090 containerd[1272]: time="2024-07-02T06:52:04.324071604Z" level=info msg="Start event monitor" Jul 2 06:52:04.324123 containerd[1272]: time="2024-07-02T06:52:04.324091251Z" level=info msg="Start snapshots syncer" Jul 2 06:52:04.324123 containerd[1272]: time="2024-07-02T06:52:04.324105778Z" level=info msg="Start cni network conf syncer for default" Jul 2 06:52:04.324123 containerd[1272]: time="2024-07-02T06:52:04.324114715Z" level=info msg="Start streaming server" Jul 2 06:52:04.324259 containerd[1272]: time="2024-07-02T06:52:04.324227566Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 06:52:04.324371 containerd[1272]: time="2024-07-02T06:52:04.324341830Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 06:52:04.324456 containerd[1272]: time="2024-07-02T06:52:04.324442058Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 06:52:04.324804 containerd[1272]: time="2024-07-02T06:52:04.324769302Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 06:52:04.325151 containerd[1272]: time="2024-07-02T06:52:04.325132844Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 06:52:04.325261 containerd[1272]: time="2024-07-02T06:52:04.325248451Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 06:52:04.325471 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 06:52:04.326315 containerd[1272]: time="2024-07-02T06:52:04.326296957Z" level=info msg="containerd successfully booted in 0.040786s" Jul 2 06:52:04.487093 tar[1271]: linux-amd64/LICENSE Jul 2 06:52:04.487208 tar[1271]: linux-amd64/README.md Jul 2 06:52:04.498638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 06:52:04.689950 systemd-networkd[1104]: eth0: Gained IPv6LL Jul 2 06:52:04.691655 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 06:52:04.693134 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 06:52:04.700196 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 2 06:52:04.702419 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:04.704664 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 06:52:04.712487 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 2 06:52:04.712640 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 2 06:52:04.714272 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 06:52:04.718321 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 06:52:05.241545 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 06:52:05.751586 kubelet[1327]: E0702 06:52:05.748460 1327 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:05.753938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:05.754097 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:06.461495 sshd_keygen[1273]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 06:52:06.511963 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 06:52:06.525555 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 06:52:06.541536 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 06:52:06.541771 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 06:52:06.552660 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 06:52:06.573515 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 06:52:06.577582 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 06:52:06.581467 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 06:52:06.583218 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 06:52:06.584439 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 06:52:06.587948 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 06:52:06.596349 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 06:52:06.596566 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 06:52:06.598187 systemd[1]: Startup finished in 795ms (kernel) + 5.236s (initrd) + 5.375s (userspace) = 11.408s. Jul 2 06:52:12.945594 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 06:52:12.947013 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:51844.service - OpenSSH per-connection server daemon (10.0.0.1:51844). Jul 2 06:52:12.991240 sshd[1350]: Accepted publickey for core from 10.0.0.1 port 51844 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:12.992956 sshd[1350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:12.999914 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 06:52:13.022295 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 06:52:13.024266 systemd-logind[1264]: New session 1 of user core. Jul 2 06:52:13.032869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 06:52:13.041116 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 06:52:13.043628 (systemd)[1353]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.126161 systemd[1353]: Queued start job for default target default.target. Jul 2 06:52:13.137362 systemd[1353]: Reached target paths.target - Paths. Jul 2 06:52:13.137391 systemd[1353]: Reached target sockets.target - Sockets. Jul 2 06:52:13.137408 systemd[1353]: Reached target timers.target - Timers. Jul 2 06:52:13.137422 systemd[1353]: Reached target basic.target - Basic System. Jul 2 06:52:13.137482 systemd[1353]: Reached target default.target - Main User Target. 
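The kubelet failure at 06:52:05 above is the usual state of a node that has not yet been bootstrapped: /var/lib/kubelet/config.yaml is typically written by kubeadm init or kubeadm join (an assumption about how this node is meant to be provisioned), so the unit keeps exiting until that file appears. A minimal illustrative check, with the path taken from the error message:

```python
# Report whether the kubelet config file the failing unit above expects is present yet.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")  # path from the kubelet error above

if KUBELET_CONFIG.exists():
    print(f"{KUBELET_CONFIG} present ({KUBELET_CONFIG.stat().st_size} bytes)")
else:
    # Matches the log above: kubelet exits with status 1 until a provisioner writes this file.
    print(f"{KUBELET_CONFIG} missing; kubelet.service will keep failing")
```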
Jul 2 06:52:13.137519 systemd[1353]: Startup finished in 87ms. Jul 2 06:52:13.137584 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 06:52:13.139046 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 06:52:13.201852 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:51852.service - OpenSSH per-connection server daemon (10.0.0.1:51852). Jul 2 06:52:13.235632 sshd[1362]: Accepted publickey for core from 10.0.0.1 port 51852 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.237139 sshd[1362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.241510 systemd-logind[1264]: New session 2 of user core. Jul 2 06:52:13.259113 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 06:52:13.316454 sshd[1362]: pam_unix(sshd:session): session closed for user core Jul 2 06:52:13.323568 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:51852.service: Deactivated successfully. Jul 2 06:52:13.324203 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 06:52:13.325014 systemd-logind[1264]: Session 2 logged out. Waiting for processes to exit. Jul 2 06:52:13.327158 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:51864.service - OpenSSH per-connection server daemon (10.0.0.1:51864). Jul 2 06:52:13.328079 systemd-logind[1264]: Removed session 2. Jul 2 06:52:13.361894 sshd[1368]: Accepted publickey for core from 10.0.0.1 port 51864 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.363241 sshd[1368]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.367247 systemd-logind[1264]: New session 3 of user core. Jul 2 06:52:13.377084 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 06:52:13.430184 sshd[1368]: pam_unix(sshd:session): session closed for user core Jul 2 06:52:13.439459 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:51864.service: Deactivated successfully. Jul 2 06:52:13.440145 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 06:52:13.440770 systemd-logind[1264]: Session 3 logged out. Waiting for processes to exit. Jul 2 06:52:13.442468 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:51876.service - OpenSSH per-connection server daemon (10.0.0.1:51876). Jul 2 06:52:13.443250 systemd-logind[1264]: Removed session 3. Jul 2 06:52:13.474893 sshd[1374]: Accepted publickey for core from 10.0.0.1 port 51876 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.476297 sshd[1374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.479865 systemd-logind[1264]: New session 4 of user core. Jul 2 06:52:13.495038 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 06:52:13.548437 sshd[1374]: pam_unix(sshd:session): session closed for user core Jul 2 06:52:13.558215 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:51876.service: Deactivated successfully. Jul 2 06:52:13.558810 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 06:52:13.559342 systemd-logind[1264]: Session 4 logged out. Waiting for processes to exit. Jul 2 06:52:13.560804 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:51888.service - OpenSSH per-connection server daemon (10.0.0.1:51888). Jul 2 06:52:13.561480 systemd-logind[1264]: Removed session 4. 
Jul 2 06:52:13.593194 sshd[1380]: Accepted publickey for core from 10.0.0.1 port 51888 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.594563 sshd[1380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.598732 systemd-logind[1264]: New session 5 of user core. Jul 2 06:52:13.604914 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 06:52:13.661891 sudo[1383]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 06:52:13.662126 sudo[1383]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:52:13.676148 sudo[1383]: pam_unix(sudo:session): session closed for user root Jul 2 06:52:13.677752 sshd[1380]: pam_unix(sshd:session): session closed for user core Jul 2 06:52:13.688670 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:51888.service: Deactivated successfully. Jul 2 06:52:13.689216 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 06:52:13.689669 systemd-logind[1264]: Session 5 logged out. Waiting for processes to exit. Jul 2 06:52:13.690856 systemd[1]: Started sshd@5-10.0.0.35:22-10.0.0.1:51904.service - OpenSSH per-connection server daemon (10.0.0.1:51904). Jul 2 06:52:13.691611 systemd-logind[1264]: Removed session 5. Jul 2 06:52:13.722085 sshd[1387]: Accepted publickey for core from 10.0.0.1 port 51904 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.723244 sshd[1387]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.726684 systemd-logind[1264]: New session 6 of user core. Jul 2 06:52:13.734972 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 06:52:13.790694 sudo[1391]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 06:52:13.791054 sudo[1391]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:52:13.794397 sudo[1391]: pam_unix(sudo:session): session closed for user root Jul 2 06:52:13.799567 sudo[1390]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 06:52:13.799841 sudo[1390]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:52:13.814232 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 06:52:13.814000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:52:13.815358 auditctl[1394]: No rules Jul 2 06:52:13.815636 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 06:52:13.815806 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 06:52:13.816134 kernel: kauditd_printk_skb: 142 callbacks suppressed Jul 2 06:52:13.816179 kernel: audit: type=1305 audit(1719903133.814:186): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 2 06:52:13.817298 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jul 2 06:52:13.814000 audit[1394]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc41d160a0 a2=420 a3=0 items=0 ppid=1 pid=1394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:13.821144 kernel: audit: type=1300 audit(1719903133.814:186): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc41d160a0 a2=420 a3=0 items=0 ppid=1 pid=1394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:13.821260 kernel: audit: type=1327 audit(1719903133.814:186): proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:52:13.814000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 2 06:52:13.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.824498 kernel: audit: type=1131 audit(1719903133.815:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.841171 augenrules[1411]: No rules Jul 2 06:52:13.841979 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 06:52:13.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.842931 sudo[1390]: pam_unix(sudo:session): session closed for user root Jul 2 06:52:13.842000 audit[1390]: USER_END pid=1390 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.844483 sshd[1387]: pam_unix(sshd:session): session closed for user core Jul 2 06:52:13.848445 kernel: audit: type=1130 audit(1719903133.841:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.848502 kernel: audit: type=1106 audit(1719903133.842:189): pid=1390 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.848527 kernel: audit: type=1104 audit(1719903133.842:190): pid=1390 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.842000 audit[1390]: CRED_DISP pid=1390 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 2 06:52:13.844000 audit[1387]: USER_END pid=1387 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.854969 kernel: audit: type=1106 audit(1719903133.844:191): pid=1387 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.855023 kernel: audit: type=1104 audit(1719903133.844:192): pid=1387 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.844000 audit[1387]: CRED_DISP pid=1387 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.874542 systemd[1]: sshd@5-10.0.0.35:22-10.0.0.1:51904.service: Deactivated successfully. Jul 2 06:52:13.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.35:22-10.0.0.1:51904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.875128 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 06:52:13.875635 systemd-logind[1264]: Session 6 logged out. Waiting for processes to exit. Jul 2 06:52:13.877025 systemd[1]: Started sshd@6-10.0.0.35:22-10.0.0.1:51914.service - OpenSSH per-connection server daemon (10.0.0.1:51914). Jul 2 06:52:13.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.35:22-10.0.0.1:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.877872 kernel: audit: type=1131 audit(1719903133.873:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.35:22-10.0.0.1:51904 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.877943 systemd-logind[1264]: Removed session 6. 
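The PROCTITLE audit records above store the executed command line as hex-encoded bytes with NUL separators between arguments. A short decoding sketch; the value logged for auditctl decodes to `/sbin/auditctl -D`, which matches the rule flush ("No rules") seen while audit-rules.service is restarted:

```python
# Decode an audit PROCTITLE value (hex bytes, NUL-separated argv) back into a
# readable command line.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle("2F7362696E2F617564697463746C002D44"))  # -> /sbin/auditctl -D
```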
Jul 2 06:52:13.911000 audit[1417]: USER_ACCT pid=1417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.912394 sshd[1417]: Accepted publickey for core from 10.0.0.1 port 51914 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:52:13.912000 audit[1417]: CRED_ACQ pid=1417 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.912000 audit[1417]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd602d6b0 a2=3 a3=7fd59da62480 items=0 ppid=1 pid=1417 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:13.912000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:52:13.913810 sshd[1417]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:52:13.917548 systemd-logind[1264]: New session 7 of user core. Jul 2 06:52:13.926925 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 06:52:13.931000 audit[1417]: USER_START pid=1417 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.933000 audit[1419]: CRED_ACQ pid=1419 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:52:13.980000 audit[1420]: USER_ACCT pid=1420 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.981000 audit[1420]: CRED_REFR pid=1420 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:13.981771 sudo[1420]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 06:52:13.982015 sudo[1420]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 06:52:13.983000 audit[1420]: USER_START pid=1420 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:52:14.086206 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 06:52:14.312872 dockerd[1430]: time="2024-07-02T06:52:14.312715156Z" level=info msg="Starting up" Jul 2 06:52:14.835804 dockerd[1430]: time="2024-07-02T06:52:14.835719666Z" level=info msg="Loading containers: start." 
Jul 2 06:52:14.890000 audit[1465]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.890000 audit[1465]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffe660a82f0 a2=0 a3=7f0802f31e90 items=0 ppid=1430 pid=1465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.890000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 2 06:52:14.892000 audit[1467]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1467 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.892000 audit[1467]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffed65ecc90 a2=0 a3=7f80a218ee90 items=0 ppid=1430 pid=1467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.892000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 2 06:52:14.894000 audit[1469]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.894000 audit[1469]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff362ac630 a2=0 a3=7f5948402e90 items=0 ppid=1430 pid=1469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.894000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 06:52:14.896000 audit[1471]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.896000 audit[1471]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffe9650aae0 a2=0 a3=7f9c44610e90 items=0 ppid=1430 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.896000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:52:14.898000 audit[1473]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1473 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.898000 audit[1473]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fff2bde2c20 a2=0 a3=7ff9e7f49e90 items=0 ppid=1430 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 2 06:52:14.900000 audit[1475]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 
06:52:14.900000 audit[1475]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7fffa88bbef0 a2=0 a3=7efce62d8e90 items=0 ppid=1430 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.900000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 2 06:52:14.975000 audit[1477]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.975000 audit[1477]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffb567ff90 a2=0 a3=7fd524841e90 items=0 ppid=1430 pid=1477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.975000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 2 06:52:14.977000 audit[1479]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.977000 audit[1479]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffdaff55630 a2=0 a3=7f243c337e90 items=0 ppid=1430 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.977000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 2 06:52:14.979000 audit[1481]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1481 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:14.979000 audit[1481]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffddf25f260 a2=0 a3=7f0f4fa1ae90 items=0 ppid=1430 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:14.979000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:52:15.076000 audit[1485]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.076000 audit[1485]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffed6a63a90 a2=0 a3=7f89d55b9e90 items=0 ppid=1430 pid=1485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.076000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:52:15.076000 audit[1486]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.076000 audit[1486]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fffa90f46e0 a2=0 a3=7f23e2869e90 items=0 ppid=1430 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.076000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:52:15.085816 kernel: Initializing XFRM netlink socket Jul 2 06:52:15.118000 audit[1494]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.118000 audit[1494]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff828dfc20 a2=0 a3=7f128b0c9e90 items=0 ppid=1430 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.118000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 2 06:52:15.142000 audit[1497]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.142000 audit[1497]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffdce5c6820 a2=0 a3=7fa58a8fde90 items=0 ppid=1430 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.142000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 2 06:52:15.146000 audit[1501]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.146000 audit[1501]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffca1744a90 a2=0 a3=7f4db80c9e90 items=0 ppid=1430 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.146000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 2 06:52:15.148000 audit[1503]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.148000 audit[1503]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffc183e6010 a2=0 a3=7f9500941e90 items=0 ppid=1430 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.148000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 2 06:52:15.150000 audit[1505]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.150000 audit[1505]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffd25a31150 a2=0 a3=7fbe42691e90 items=0 ppid=1430 pid=1505 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.150000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 2 06:52:15.153000 audit[1507]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.153000 audit[1507]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7fff72655bd0 a2=0 a3=7f28ce814e90 items=0 ppid=1430 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.153000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 2 06:52:15.155000 audit[1509]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.155000 audit[1509]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe2c3f6500 a2=0 a3=7fc76f355e90 items=0 ppid=1430 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.155000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 2 06:52:15.161000 audit[1512]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.161000 audit[1512]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffc07f4f070 a2=0 a3=7fb6515d2e90 items=0 ppid=1430 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.161000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 2 06:52:15.163000 audit[1514]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.163000 audit[1514]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7fff8b4d7ca0 a2=0 a3=7f21e50f6e90 items=0 ppid=1430 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.163000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 2 06:52:15.166000 audit[1516]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.166000 audit[1516]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=428 a0=3 a1=7ffca02af8c0 a2=0 a3=7f855b67ae90 items=0 ppid=1430 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.166000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 2 06:52:15.168000 audit[1518]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.168000 audit[1518]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffcb007e000 a2=0 a3=7fdecd72ae90 items=0 ppid=1430 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.168000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 2 06:52:15.169946 systemd-networkd[1104]: docker0: Link UP Jul 2 06:52:15.179000 audit[1522]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.179000 audit[1522]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd541dea90 a2=0 a3=7f1e5ee37e90 items=0 ppid=1430 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:52:15.181000 audit[1523]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:52:15.181000 audit[1523]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7fff041e6a70 a2=0 a3=7f78dd2b5e90 items=0 ppid=1430 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:52:15.181000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 2 06:52:15.182315 dockerd[1430]: time="2024-07-02T06:52:15.182272416Z" level=info msg="Loading containers: done." Jul 2 06:52:15.225565 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck897291601-merged.mount: Deactivated successfully. 
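The burst of NETFILTER_CFG/SYSCALL/PROCTITLE records above is dockerd programming its default chains (DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE-1/2 in the filter table, plus the DOCKER nat chain and the 172.17.0.0/16 MASQUERADE rule) before bringing docker0 up. Decoding two of the proctitle values, with the same scheme as the earlier sketch, recovers the underlying iptables invocations:

```python
# Two proctitle values copied from the audit records above; decoding shows the
# iptables commands dockerd issued while setting up its chains.
samples = [
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552",
    "2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552",
]
for hex_value in samples:
    print(" ".join(arg.decode() for arg in bytes.fromhex(hex_value).split(b"\x00") if arg))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER
# -> /usr/sbin/iptables --wait -t filter -N DOCKER-USER
```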
Jul 2 06:52:15.232185 dockerd[1430]: time="2024-07-02T06:52:15.232136509Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 06:52:15.232404 dockerd[1430]: time="2024-07-02T06:52:15.232374776Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 06:52:15.232545 dockerd[1430]: time="2024-07-02T06:52:15.232518966Z" level=info msg="Daemon has completed initialization" Jul 2 06:52:15.265997 dockerd[1430]: time="2024-07-02T06:52:15.265919394Z" level=info msg="API listen on /run/docker.sock" Jul 2 06:52:15.268122 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 06:52:15.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:16.005176 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 06:52:16.005421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:16.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:16.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:16.016470 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:16.128840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:16.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:16.216280 containerd[1272]: time="2024-07-02T06:52:16.216223764Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 06:52:16.217063 kubelet[1570]: E0702 06:52:16.217027 1570 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:16.220031 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:16.220156 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:16.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:18.711610 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894581272.mount: Deactivated successfully. 
Jul 2 06:52:22.220706 containerd[1272]: time="2024-07-02T06:52:22.220631371Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:22.224397 containerd[1272]: time="2024-07-02T06:52:22.224350035Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235837" Jul 2 06:52:22.229194 containerd[1272]: time="2024-07-02T06:52:22.229166167Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:22.243699 containerd[1272]: time="2024-07-02T06:52:22.243662122Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:22.270974 containerd[1272]: time="2024-07-02T06:52:22.270928967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:22.272074 containerd[1272]: time="2024-07-02T06:52:22.272037927Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 6.055764029s" Jul 2 06:52:22.272158 containerd[1272]: time="2024-07-02T06:52:22.272084785Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 06:52:22.291020 containerd[1272]: time="2024-07-02T06:52:22.290977756Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 06:52:25.059349 containerd[1272]: time="2024-07-02T06:52:25.059124269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:25.076762 containerd[1272]: time="2024-07-02T06:52:25.076689129Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069747" Jul 2 06:52:25.088851 containerd[1272]: time="2024-07-02T06:52:25.088802406Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:25.105960 containerd[1272]: time="2024-07-02T06:52:25.105882407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:25.124832 containerd[1272]: time="2024-07-02T06:52:25.124681001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:25.126024 containerd[1272]: time="2024-07-02T06:52:25.125950742Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest 
\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.834922452s" Jul 2 06:52:25.126024 containerd[1272]: time="2024-07-02T06:52:25.126014211Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 06:52:25.144623 containerd[1272]: time="2024-07-02T06:52:25.144576402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 06:52:26.471041 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 06:52:26.472192 kernel: kauditd_printk_skb: 88 callbacks suppressed Jul 2 06:52:26.472240 kernel: audit: type=1130 audit(1719903146.470:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.471273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:26.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.477253 kernel: audit: type=1131 audit(1719903146.470:233): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.489286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:26.577696 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:26.577000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.601809 kernel: audit: type=1130 audit(1719903146.577:234): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:26.639293 kubelet[1657]: E0702 06:52:26.639153 1657 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:26.641693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:26.641852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:26.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:26.652843 kernel: audit: type=1131 audit(1719903146.641:235): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 2 06:52:28.577543 containerd[1272]: time="2024-07-02T06:52:28.577469422Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:28.597961 containerd[1272]: time="2024-07-02T06:52:28.597905186Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153803" Jul 2 06:52:28.638252 containerd[1272]: time="2024-07-02T06:52:28.638163575Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:28.688842 containerd[1272]: time="2024-07-02T06:52:28.688797031Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:28.771536 containerd[1272]: time="2024-07-02T06:52:28.771439826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:28.772485 containerd[1272]: time="2024-07-02T06:52:28.772432708Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 3.627809818s" Jul 2 06:52:28.772485 containerd[1272]: time="2024-07-02T06:52:28.772467042Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 06:52:28.794010 containerd[1272]: time="2024-07-02T06:52:28.793965690Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 06:52:32.349132 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1449889042.mount: Deactivated successfully. 
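The pull messages above carry both byte counts and wall-clock durations, so the effective transfer rate for the first three control-plane images can be read straight off the log. A rough arithmetic sketch, with the numbers copied from the "bytes read" and "in …s" fields:

```python
# Effective pull rate for the image pulls reported above, using the byte counts
# and durations that containerd logged. Purely arithmetic on the log values.
PULLS = {
    "kube-apiserver:v1.29.6":          (35_235_837, 6.055764029),
    "kube-controller-manager:v1.29.6": (32_069_747, 2.834922452),
    "kube-scheduler:v1.29.6":          (17_153_803, 3.627809818),
}

for image, (size_bytes, seconds) in PULLS.items():
    print(f"{image}: {size_bytes / seconds / 2**20:.1f} MiB/s")
# -> roughly 5.5, 10.8 and 4.5 MiB/s respectively
```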
Jul 2 06:52:32.833530 containerd[1272]: time="2024-07-02T06:52:32.833484529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:32.852125 containerd[1272]: time="2024-07-02T06:52:32.852080704Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409334" Jul 2 06:52:32.875345 containerd[1272]: time="2024-07-02T06:52:32.875294588Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:32.889614 containerd[1272]: time="2024-07-02T06:52:32.889566593Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:32.897352 containerd[1272]: time="2024-07-02T06:52:32.897318130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:32.897974 containerd[1272]: time="2024-07-02T06:52:32.897941900Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 4.103912029s" Jul 2 06:52:32.898038 containerd[1272]: time="2024-07-02T06:52:32.897985963Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 06:52:32.916858 containerd[1272]: time="2024-07-02T06:52:32.916820003Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 06:52:35.162489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3865039426.mount: Deactivated successfully. Jul 2 06:52:36.784262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 06:52:36.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:36.784497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:36.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:36.790329 kernel: audit: type=1130 audit(1719903156.783:236): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:36.790417 kernel: audit: type=1131 audit(1719903156.783:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:36.796218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:36.882296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 06:52:36.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:36.885805 kernel: audit: type=1130 audit(1719903156.881:238): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:37.792112 kubelet[1697]: E0702 06:52:37.792039 1697 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:37.794128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:37.794278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:37.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:37.797811 kernel: audit: type=1131 audit(1719903157.793:239): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:44.746587 containerd[1272]: time="2024-07-02T06:52:44.746479836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:44.767522 containerd[1272]: time="2024-07-02T06:52:44.767429283Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jul 2 06:52:44.780963 containerd[1272]: time="2024-07-02T06:52:44.780911881Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:44.803911 containerd[1272]: time="2024-07-02T06:52:44.803831216Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:44.842279 containerd[1272]: time="2024-07-02T06:52:44.842191987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:44.843305 containerd[1272]: time="2024-07-02T06:52:44.843247481Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 11.926380461s" Jul 2 06:52:44.843356 containerd[1272]: time="2024-07-02T06:52:44.843312626Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 06:52:44.866361 containerd[1272]: time="2024-07-02T06:52:44.866322354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" 
Jul 2 06:52:46.598569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850282521.mount: Deactivated successfully. Jul 2 06:52:46.723156 containerd[1272]: time="2024-07-02T06:52:46.723067473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:46.745626 containerd[1272]: time="2024-07-02T06:52:46.745541550Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jul 2 06:52:46.772765 containerd[1272]: time="2024-07-02T06:52:46.772697782Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:46.787705 containerd[1272]: time="2024-07-02T06:52:46.787645977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:46.808693 containerd[1272]: time="2024-07-02T06:52:46.808622513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:46.809632 containerd[1272]: time="2024-07-02T06:52:46.809577711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.943034876s" Jul 2 06:52:46.809632 containerd[1272]: time="2024-07-02T06:52:46.809620262Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 06:52:46.834535 containerd[1272]: time="2024-07-02T06:52:46.834489991Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 06:52:48.034113 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 06:52:48.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.034367 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:48.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.039658 kernel: audit: type=1130 audit(1719903168.032:240): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.039702 kernel: audit: type=1131 audit(1719903168.032:241): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.045141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:48.134866 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 06:52:48.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.145813 kernel: audit: type=1130 audit(1719903168.133:242): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:48.199209 kubelet[1767]: E0702 06:52:48.199145 1767 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:48.201390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:48.201552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:48.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:48.204818 kernel: audit: type=1131 audit(1719903168.200:243): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:49.224482 update_engine[1267]: I0702 06:52:49.224390 1267 update_attempter.cc:509] Updating boot flags... Jul 2 06:52:49.359816 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1782) Jul 2 06:52:49.404050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483820016.mount: Deactivated successfully. 
Jul 2 06:52:49.408807 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1781) Jul 2 06:52:49.441677 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1781) Jul 2 06:52:57.653267 containerd[1272]: time="2024-07-02T06:52:57.653171649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:57.660969 containerd[1272]: time="2024-07-02T06:52:57.660913588Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651625" Jul 2 06:52:57.673208 containerd[1272]: time="2024-07-02T06:52:57.673157513Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:57.693691 containerd[1272]: time="2024-07-02T06:52:57.693634386Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:57.721422 containerd[1272]: time="2024-07-02T06:52:57.721385384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:52:57.723030 containerd[1272]: time="2024-07-02T06:52:57.722986139Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 10.888448138s" Jul 2 06:52:57.723110 containerd[1272]: time="2024-07-02T06:52:57.723031845Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 06:52:58.284073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 06:52:58.284329 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:58.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.290685 kernel: audit: type=1130 audit(1719903178.283:244): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.290764 kernel: audit: type=1131 audit(1719903178.283:245): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.294171 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:58.389145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
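At this point kubelet.service has been through five scheduled restarts, all failing on the same missing config.yaml. The "restart counter is at N" timestamps above are spaced roughly 10-11 seconds apart, consistent with a restart delay on the order of 10 s plus startup time (the unit file itself is not shown in this log). A quick check of the spacing, using the timestamps as logged:

```python
# Intervals between the "Scheduled restart job, restart counter is at N" lines
# above (counters 1-5). Times copied from the log; all on the same day, so only
# the time of day is needed.
from datetime import datetime

RESTARTS = [
    "06:52:16.005176",  # counter 1
    "06:52:26.471041",  # counter 2
    "06:52:36.784262",  # counter 3
    "06:52:48.034113",  # counter 4
    "06:52:58.284073",  # counter 5
]

times = [datetime.strptime(t, "%H:%M:%S.%f") for t in RESTARTS]
for earlier, later in zip(times, times[1:]):
    print(f"{(later - earlier).total_seconds():.2f} s")
# -> 10.47 s, 10.31 s, 11.25 s, 10.25 s
```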
Jul 2 06:52:58.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.392810 kernel: audit: type=1130 audit(1719903178.388:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:58.428367 kubelet[1915]: E0702 06:52:58.428305 1915 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 06:52:58.430743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 06:52:58.430932 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 06:52:58.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:58.434804 kernel: audit: type=1131 audit(1719903178.430:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:52:59.876333 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:52:59.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:59.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:59.881670 kernel: audit: type=1130 audit(1719903179.875:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:59.881707 kernel: audit: type=1131 audit(1719903179.875:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:52:59.886088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:52:59.898843 systemd[1]: Reloading. Jul 2 06:53:00.414698 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 06:53:00.476000 audit: BPF prog-id=41 op=LOAD Jul 2 06:53:00.476000 audit: BPF prog-id=27 op=UNLOAD Jul 2 06:53:00.476000 audit: BPF prog-id=42 op=LOAD Jul 2 06:53:00.479532 kernel: audit: type=1334 audit(1719903180.476:250): prog-id=41 op=LOAD Jul 2 06:53:00.479570 kernel: audit: type=1334 audit(1719903180.476:251): prog-id=27 op=UNLOAD Jul 2 06:53:00.479586 kernel: audit: type=1334 audit(1719903180.476:252): prog-id=42 op=LOAD Jul 2 06:53:00.479600 kernel: audit: type=1334 audit(1719903180.476:253): prog-id=43 op=LOAD Jul 2 06:53:00.476000 audit: BPF prog-id=43 op=LOAD Jul 2 06:53:00.476000 audit: BPF prog-id=28 op=UNLOAD Jul 2 06:53:00.476000 audit: BPF prog-id=29 op=UNLOAD Jul 2 06:53:00.477000 audit: BPF prog-id=44 op=LOAD Jul 2 06:53:00.477000 audit: BPF prog-id=38 op=UNLOAD Jul 2 06:53:00.477000 audit: BPF prog-id=45 op=LOAD Jul 2 06:53:00.477000 audit: BPF prog-id=46 op=LOAD Jul 2 06:53:00.477000 audit: BPF prog-id=39 op=UNLOAD Jul 2 06:53:00.477000 audit: BPF prog-id=40 op=UNLOAD Jul 2 06:53:00.478000 audit: BPF prog-id=47 op=LOAD Jul 2 06:53:00.478000 audit: BPF prog-id=36 op=UNLOAD Jul 2 06:53:00.478000 audit: BPF prog-id=48 op=LOAD Jul 2 06:53:00.478000 audit: BPF prog-id=49 op=LOAD Jul 2 06:53:00.478000 audit: BPF prog-id=30 op=UNLOAD Jul 2 06:53:00.478000 audit: BPF prog-id=31 op=UNLOAD Jul 2 06:53:00.480000 audit: BPF prog-id=50 op=LOAD Jul 2 06:53:00.480000 audit: BPF prog-id=32 op=UNLOAD Jul 2 06:53:00.480000 audit: BPF prog-id=51 op=LOAD Jul 2 06:53:00.480000 audit: BPF prog-id=52 op=LOAD Jul 2 06:53:00.480000 audit: BPF prog-id=33 op=UNLOAD Jul 2 06:53:00.480000 audit: BPF prog-id=34 op=UNLOAD Jul 2 06:53:00.481000 audit: BPF prog-id=53 op=LOAD Jul 2 06:53:00.481000 audit: BPF prog-id=37 op=UNLOAD Jul 2 06:53:00.481000 audit: BPF prog-id=54 op=LOAD Jul 2 06:53:00.481000 audit: BPF prog-id=35 op=UNLOAD Jul 2 06:53:00.502550 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 06:53:00.502611 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 06:53:00.502806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:53:00.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 2 06:53:00.504472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:53:00.615455 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:53:00.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:00.654683 kubelet[1991]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:53:00.654683 kubelet[1991]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:53:00.654683 kubelet[1991]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 06:53:00.655125 kubelet[1991]: I0702 06:53:00.654728 1991 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:53:01.028130 kubelet[1991]: I0702 06:53:01.028068 1991 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 06:53:01.028130 kubelet[1991]: I0702 06:53:01.028104 1991 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:53:01.028362 kubelet[1991]: I0702 06:53:01.028341 1991 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 06:53:01.064342 kubelet[1991]: E0702 06:53:01.064296 1991 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.065874 kubelet[1991]: I0702 06:53:01.065826 1991 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:53:01.092128 kubelet[1991]: I0702 06:53:01.092070 1991 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 06:53:01.097805 kubelet[1991]: I0702 06:53:01.097762 1991 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:53:01.098024 kubelet[1991]: I0702 06:53:01.097996 1991 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:53:01.100680 kubelet[1991]: I0702 06:53:01.100654 1991 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:53:01.100680 kubelet[1991]: I0702 06:53:01.100676 1991 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:53:01.100843 kubelet[1991]: I0702 06:53:01.100821 1991 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:53:01.100952 kubelet[1991]: I0702 06:53:01.100933 1991 kubelet.go:396] "Attempting to sync node with API server" Jul 2 06:53:01.100952 kubelet[1991]: I0702 
06:53:01.100953 1991 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:53:01.101021 kubelet[1991]: I0702 06:53:01.100981 1991 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:53:01.101021 kubelet[1991]: I0702 06:53:01.100997 1991 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:53:01.101449 kubelet[1991]: W0702 06:53:01.101406 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.101449 kubelet[1991]: E0702 06:53:01.101449 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.101666 kubelet[1991]: W0702 06:53:01.101624 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.101666 kubelet[1991]: E0702 06:53:01.101660 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.103331 kubelet[1991]: I0702 06:53:01.103319 1991 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:53:01.114902 kubelet[1991]: I0702 06:53:01.114875 1991 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:53:01.115756 kubelet[1991]: W0702 06:53:01.115736 1991 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 2 06:53:01.116443 kubelet[1991]: I0702 06:53:01.116425 1991 server.go:1256] "Started kubelet" Jul 2 06:53:01.117438 kubelet[1991]: I0702 06:53:01.117419 1991 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:53:01.119333 kubelet[1991]: I0702 06:53:01.119308 1991 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:53:01.119000 audit[2003]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2003 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.119000 audit[2003]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe75ea43c0 a2=0 a3=7f06a6fb1e90 items=0 ppid=1991 pid=2003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.119000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 06:53:01.120477 kubelet[1991]: I0702 06:53:01.120271 1991 server.go:461] "Adding debug handlers to kubelet server" Jul 2 06:53:01.120000 audit[2004]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.120000 audit[2004]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd4b8aac40 a2=0 a3=7fdd6a364e90 items=0 ppid=1991 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:53:01.121521 kubelet[1991]: I0702 06:53:01.121475 1991 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:53:01.121934 kubelet[1991]: I0702 06:53:01.121631 1991 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:53:01.121934 kubelet[1991]: I0702 06:53:01.121681 1991 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:53:01.121934 kubelet[1991]: I0702 06:53:01.121724 1991 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 06:53:01.121934 kubelet[1991]: I0702 06:53:01.121819 1991 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 06:53:01.122357 kubelet[1991]: W0702 06:53:01.122174 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.122357 kubelet[1991]: E0702 06:53:01.122224 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.122974 kubelet[1991]: E0702 06:53:01.122948 1991 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:53:01.123125 kubelet[1991]: I0702 06:53:01.123103 1991 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:53:01.123280 kubelet[1991]: I0702 06:53:01.123185 1991 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:53:01.124263 kubelet[1991]: E0702 06:53:01.124127 1991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="200ms" Jul 2 06:53:01.124000 audit[2006]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.124000 audit[2006]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe7b68c9e0 a2=0 a3=7f2d5ec7ae90 items=0 ppid=1991 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.124000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:53:01.126436 kubelet[1991]: I0702 06:53:01.125531 1991 factory.go:221] Registration of the containerd container factory successfully Jul 2 06:53:01.127000 audit[2008]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.127000 audit[2008]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffc9631a730 a2=0 a3=7f8162d89e90 items=0 ppid=1991 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.127000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:53:01.135665 kubelet[1991]: E0702 06:53:01.135627 1991 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.35:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.35:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.17de52d4ae81d623 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-07-02 06:53:01.116397091 +0000 UTC m=+0.497312157,LastTimestamp:2024-07-02 06:53:01.116397091 +0000 UTC m=+0.497312157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 2 06:53:01.137910 kubelet[1991]: I0702 06:53:01.137889 1991 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:53:01.137910 kubelet[1991]: I0702 06:53:01.137908 1991 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:53:01.138037 kubelet[1991]: I0702 06:53:01.137924 1991 state_mem.go:36] "Initialized new in-memory state 
store" Jul 2 06:53:01.137000 audit[2013]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.137000 audit[2013]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc7b7f3e50 a2=0 a3=7facbbe0fe90 items=0 ppid=1991 pid=2013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.137000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 2 06:53:01.138379 kubelet[1991]: I0702 06:53:01.138339 1991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:53:01.138000 audit[2014]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.138000 audit[2014]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff88963030 a2=0 a3=7fda96781e90 items=0 ppid=1991 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.138000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 06:53:01.139000 audit[2016]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:01.139000 audit[2016]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7fff4064e370 a2=0 a3=7f63c4ed2e90 items=0 ppid=1991 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.139000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 2 06:53:01.139000 audit[2018]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.139000 audit[2018]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcc9103950 a2=0 a3=7fa8dcde2e90 items=0 ppid=1991 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 06:53:01.140000 audit[2020]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=2020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:01.140000 audit[2020]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa8bdb3b0 a2=0 a3=7f9af164ee90 items=0 ppid=1991 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.140000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 06:53:01.141494 kubelet[1991]: I0702 06:53:01.140140 1991 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 06:53:01.141494 kubelet[1991]: I0702 06:53:01.140207 1991 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:53:01.141494 kubelet[1991]: I0702 06:53:01.140268 1991 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 06:53:01.141494 kubelet[1991]: E0702 06:53:01.140346 1991 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:53:01.141494 kubelet[1991]: W0702 06:53:01.140980 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.141494 kubelet[1991]: E0702 06:53:01.141012 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:01.140000 audit[2021]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=2021 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:01.140000 audit[2021]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd48071a80 a2=0 a3=7fde4c887e90 items=0 ppid=1991 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 2 06:53:01.141000 audit[2023]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:01.141000 audit[2023]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7fff4478e860 a2=0 a3=7f9994ec0e90 items=0 ppid=1991 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 2 06:53:01.142000 audit[2024]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:01.142000 audit[2024]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffddcafffa0 a2=0 a3=7f46624cae90 items=0 ppid=1991 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:01.142000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 2 06:53:01.223109 kubelet[1991]: I0702 06:53:01.223080 1991 kubelet_node_status.go:73] "Attempting to 
register node" node="localhost" Jul 2 06:53:01.223478 kubelet[1991]: E0702 06:53:01.223452 1991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 06:53:01.240622 kubelet[1991]: E0702 06:53:01.240579 1991 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 06:53:01.325463 kubelet[1991]: E0702 06:53:01.325434 1991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="400ms" Jul 2 06:53:01.424998 kubelet[1991]: I0702 06:53:01.424950 1991 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:01.425383 kubelet[1991]: E0702 06:53:01.425363 1991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 06:53:01.441516 kubelet[1991]: E0702 06:53:01.441472 1991 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 06:53:01.726483 kubelet[1991]: E0702 06:53:01.726344 1991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="800ms" Jul 2 06:53:01.747615 kubelet[1991]: I0702 06:53:01.747554 1991 policy_none.go:49] "None policy: Start" Jul 2 06:53:01.748610 kubelet[1991]: I0702 06:53:01.748595 1991 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:53:01.748686 kubelet[1991]: I0702 06:53:01.748617 1991 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:53:01.790565 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 06:53:01.809390 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 06:53:01.811873 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 2 06:53:01.821681 kubelet[1991]: I0702 06:53:01.821633 1991 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:53:01.822039 kubelet[1991]: I0702 06:53:01.822013 1991 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:53:01.823113 kubelet[1991]: E0702 06:53:01.823091 1991 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 2 06:53:01.826938 kubelet[1991]: I0702 06:53:01.826901 1991 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:01.827325 kubelet[1991]: E0702 06:53:01.827306 1991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 06:53:01.841730 kubelet[1991]: I0702 06:53:01.841677 1991 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 06:53:01.842911 kubelet[1991]: I0702 06:53:01.842893 1991 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 06:53:01.843758 kubelet[1991]: I0702 06:53:01.843740 1991 topology_manager.go:215] "Topology Admit Handler" podUID="ec90f202a412747a2b1d50a20dd4050d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 06:53:01.848818 systemd[1]: Created slice kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice - libcontainer container kubepods-burstable-pod42b008e702ec2a5b396aebedf13804b4.slice. Jul 2 06:53:01.870467 systemd[1]: Created slice kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice - libcontainer container kubepods-burstable-pod593d08bacb1d5de22dcb8f5224a99e3c.slice. Jul 2 06:53:01.884504 systemd[1]: Created slice kubepods-burstable-podec90f202a412747a2b1d50a20dd4050d.slice - libcontainer container kubepods-burstable-podec90f202a412747a2b1d50a20dd4050d.slice. 
Jul 2 06:53:01.926549 kubelet[1991]: I0702 06:53:01.926495 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:01.926872 kubelet[1991]: I0702 06:53:01.926846 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:01.926923 kubelet[1991]: I0702 06:53:01.926894 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:01.926949 kubelet[1991]: I0702 06:53:01.926928 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:01.926976 kubelet[1991]: I0702 06:53:01.926958 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:01.927043 kubelet[1991]: I0702 06:53:01.926988 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:01.927043 kubelet[1991]: I0702 06:53:01.927021 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 06:53:01.927166 kubelet[1991]: I0702 06:53:01.927106 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:01.927222 kubelet[1991]: I0702 06:53:01.927209 1991 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " 
pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:02.055781 kubelet[1991]: W0702 06:53:02.055703 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.055781 kubelet[1991]: E0702 06:53:02.055771 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.057166 kubelet[1991]: W0702 06:53:02.057108 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.057214 kubelet[1991]: E0702 06:53:02.057165 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.169313 kubelet[1991]: E0702 06:53:02.169272 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:02.170016 containerd[1272]: time="2024-07-02T06:53:02.169943960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:02.183289 kubelet[1991]: E0702 06:53:02.183259 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:02.183778 containerd[1272]: time="2024-07-02T06:53:02.183734675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:02.186959 kubelet[1991]: E0702 06:53:02.186923 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:02.187272 containerd[1272]: time="2024-07-02T06:53:02.187244804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec90f202a412747a2b1d50a20dd4050d,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:02.458411 kubelet[1991]: W0702 06:53:02.458268 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.458411 kubelet[1991]: E0702 06:53:02.458331 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.474730 kubelet[1991]: W0702 06:53:02.474678 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.474837 kubelet[1991]: E0702 06:53:02.474746 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:02.527462 kubelet[1991]: E0702 06:53:02.527425 1991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="1.6s" Jul 2 06:53:02.629257 kubelet[1991]: I0702 06:53:02.629216 1991 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:02.629620 kubelet[1991]: E0702 06:53:02.629590 1991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 06:53:03.085006 kubelet[1991]: E0702 06:53:03.084954 1991 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:03.344005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029024119.mount: Deactivated successfully. Jul 2 06:53:03.378439 containerd[1272]: time="2024-07-02T06:53:03.378294943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.424729 containerd[1272]: time="2024-07-02T06:53:03.424644163Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:53:03.443529 containerd[1272]: time="2024-07-02T06:53:03.443477422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.472985 containerd[1272]: time="2024-07-02T06:53:03.472917888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.484205 containerd[1272]: time="2024-07-02T06:53:03.484143148Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.497258 containerd[1272]: time="2024-07-02T06:53:03.497179933Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 06:53:03.527735 containerd[1272]: time="2024-07-02T06:53:03.527685947Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 06:53:03.532522 containerd[1272]: time="2024-07-02T06:53:03.532458984Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.537794 containerd[1272]: time="2024-07-02T06:53:03.537752051Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.542976 containerd[1272]: time="2024-07-02T06:53:03.542911938Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.553325 containerd[1272]: time="2024-07-02T06:53:03.553265144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.554044 containerd[1272]: time="2024-07-02T06:53:03.554002774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.370150968s" Jul 2 06:53:03.572684 containerd[1272]: time="2024-07-02T06:53:03.572645836Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.588144 containerd[1272]: time="2024-07-02T06:53:03.588070301Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.621639 containerd[1272]: time="2024-07-02T06:53:03.621489737Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.622506 containerd[1272]: time="2024-07-02T06:53:03.622431802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.452381192s" Jul 2 06:53:03.641016 containerd[1272]: time="2024-07-02T06:53:03.640967582Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 06:53:03.642182 containerd[1272]: time="2024-07-02T06:53:03.642139111Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.454832913s" Jul 2 06:53:03.758054 kubelet[1991]: W0702 06:53:03.758003 1991 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:03.758054 kubelet[1991]: E0702 06:53:03.758039 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:03.814745 containerd[1272]: time="2024-07-02T06:53:03.814294936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:03.814745 containerd[1272]: time="2024-07-02T06:53:03.814713314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.814745 containerd[1272]: time="2024-07-02T06:53:03.814728573Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:03.814962 containerd[1272]: time="2024-07-02T06:53:03.814737450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.869133 systemd[1]: Started cri-containerd-d78223a67714abe1d8cdc1a1e163e1830ce536ce17e13722cfa91d8e3213a8c3.scope - libcontainer container d78223a67714abe1d8cdc1a1e163e1830ce536ce17e13722cfa91d8e3213a8c3. Jul 2 06:53:03.876000 audit: BPF prog-id=55 op=LOAD Jul 2 06:53:03.878589 kernel: kauditd_printk_skb: 62 callbacks suppressed Jul 2 06:53:03.878762 kernel: audit: type=1334 audit(1719903183.876:292): prog-id=55 op=LOAD Jul 2 06:53:03.877000 audit: BPF prog-id=56 op=LOAD Jul 2 06:53:03.880561 kernel: audit: type=1334 audit(1719903183.877:293): prog-id=56 op=LOAD Jul 2 06:53:03.880601 kernel: audit: type=1300 audit(1719903183.877:293): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2037 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.877000 audit[2046]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2037 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383232336136373731346162653164386364633161316531363365 Jul 2 06:53:03.887001 kernel: audit: type=1327 audit(1719903183.877:293): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383232336136373731346162653164386364633161316531363365 Jul 2 06:53:03.887056 kernel: audit: type=1334 audit(1719903183.877:294): prog-id=57 op=LOAD Jul 2 06:53:03.877000 audit: BPF prog-id=57 op=LOAD Jul 2 06:53:03.877000 audit[2046]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 
a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2037 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.890862 kernel: audit: type=1300 audit(1719903183.877:294): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2037 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.890913 kernel: audit: type=1327 audit(1719903183.877:294): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383232336136373731346162653164386364633161316531363365 Jul 2 06:53:03.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383232336136373731346162653164386364633161316531363365 Jul 2 06:53:03.877000 audit: BPF prog-id=57 op=UNLOAD Jul 2 06:53:03.877000 audit: BPF prog-id=56 op=UNLOAD Jul 2 06:53:03.895464 kernel: audit: type=1334 audit(1719903183.877:295): prog-id=57 op=UNLOAD Jul 2 06:53:03.895494 kernel: audit: type=1334 audit(1719903183.877:296): prog-id=56 op=UNLOAD Jul 2 06:53:03.895509 kernel: audit: type=1334 audit(1719903183.877:297): prog-id=58 op=LOAD Jul 2 06:53:03.877000 audit: BPF prog-id=58 op=LOAD Jul 2 06:53:03.877000 audit[2046]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2037 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6437383232336136373731346162653164386364633161316531363365 Jul 2 06:53:03.907394 containerd[1272]: time="2024-07-02T06:53:03.907290983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:03.907394 containerd[1272]: time="2024-07-02T06:53:03.907340687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.907623 containerd[1272]: time="2024-07-02T06:53:03.907358390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:03.908152 containerd[1272]: time="2024-07-02T06:53:03.908109376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.909742 containerd[1272]: time="2024-07-02T06:53:03.909705765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:593d08bacb1d5de22dcb8f5224a99e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d78223a67714abe1d8cdc1a1e163e1830ce536ce17e13722cfa91d8e3213a8c3\"" Jul 2 06:53:03.910900 kubelet[1991]: E0702 06:53:03.910878 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:03.913271 containerd[1272]: time="2024-07-02T06:53:03.913239146Z" level=info msg="CreateContainer within sandbox \"d78223a67714abe1d8cdc1a1e163e1830ce536ce17e13722cfa91d8e3213a8c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 06:53:03.925980 systemd[1]: Started cri-containerd-394e17dd8effc062a6c3d5fea14850226959d44d5a66267d3a4a849b90cc0a9a.scope - libcontainer container 394e17dd8effc062a6c3d5fea14850226959d44d5a66267d3a4a849b90cc0a9a. Jul 2 06:53:03.930543 containerd[1272]: time="2024-07-02T06:53:03.930399373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:03.931129 containerd[1272]: time="2024-07-02T06:53:03.931067853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.931129 containerd[1272]: time="2024-07-02T06:53:03.931092870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:03.931188 containerd[1272]: time="2024-07-02T06:53:03.931128918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:03.934000 audit: BPF prog-id=59 op=LOAD Jul 2 06:53:03.934000 audit: BPF prog-id=60 op=LOAD Jul 2 06:53:03.934000 audit[2088]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2073 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339346531376464386566666330363261366333643566656131343835 Jul 2 06:53:03.934000 audit: BPF prog-id=61 op=LOAD Jul 2 06:53:03.934000 audit[2088]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2073 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339346531376464386566666330363261366333643566656131343835 Jul 2 06:53:03.934000 audit: BPF prog-id=61 op=UNLOAD Jul 2 06:53:03.934000 audit: BPF prog-id=60 op=UNLOAD Jul 2 06:53:03.934000 audit: BPF prog-id=62 op=LOAD Jul 2 06:53:03.934000 audit[2088]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2073 pid=2088 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.934000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3339346531376464386566666330363261366333643566656131343835 Jul 2 06:53:03.950949 systemd[1]: Started cri-containerd-fa86039e052aab567b5678f1132776706b746bea09ce87314d3701cfaa08b3a1.scope - libcontainer container fa86039e052aab567b5678f1132776706b746bea09ce87314d3701cfaa08b3a1. 
Jul 2 06:53:03.960000 audit: BPF prog-id=63 op=LOAD Jul 2 06:53:03.960000 audit: BPF prog-id=64 op=LOAD Jul 2 06:53:03.960000 audit[2122]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2105 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.960000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661383630333965303532616162353637623536373866313133323737 Jul 2 06:53:03.961000 audit: BPF prog-id=65 op=LOAD Jul 2 06:53:03.961000 audit[2122]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2105 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661383630333965303532616162353637623536373866313133323737 Jul 2 06:53:03.961000 audit: BPF prog-id=65 op=UNLOAD Jul 2 06:53:03.961000 audit: BPF prog-id=64 op=UNLOAD Jul 2 06:53:03.961000 audit: BPF prog-id=66 op=LOAD Jul 2 06:53:03.961000 audit[2122]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2105 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:03.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661383630333965303532616162353637623536373866313133323737 Jul 2 06:53:03.962367 containerd[1272]: time="2024-07-02T06:53:03.962273356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:42b008e702ec2a5b396aebedf13804b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"394e17dd8effc062a6c3d5fea14850226959d44d5a66267d3a4a849b90cc0a9a\"" Jul 2 06:53:03.963147 kubelet[1991]: E0702 06:53:03.963129 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:03.964961 containerd[1272]: time="2024-07-02T06:53:03.964914934Z" level=info msg="CreateContainer within sandbox \"394e17dd8effc062a6c3d5fea14850226959d44d5a66267d3a4a849b90cc0a9a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 06:53:03.983359 containerd[1272]: time="2024-07-02T06:53:03.983315900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ec90f202a412747a2b1d50a20dd4050d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa86039e052aab567b5678f1132776706b746bea09ce87314d3701cfaa08b3a1\"" Jul 2 06:53:03.983843 kubelet[1991]: E0702 06:53:03.983826 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:03.985274 containerd[1272]: time="2024-07-02T06:53:03.985245799Z" level=info msg="CreateContainer within sandbox \"fa86039e052aab567b5678f1132776706b746bea09ce87314d3701cfaa08b3a1\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 06:53:04.128282 kubelet[1991]: E0702 06:53:04.128225 1991 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.35:6443: connect: connection refused" interval="3.2s" Jul 2 06:53:04.206345 kubelet[1991]: W0702 06:53:04.206218 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:04.206345 kubelet[1991]: E0702 06:53:04.206284 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:04.231935 kubelet[1991]: I0702 06:53:04.231892 1991 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:04.232360 kubelet[1991]: E0702 06:53:04.232331 1991 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.35:6443/api/v1/nodes\": dial tcp 10.0.0.35:6443: connect: connection refused" node="localhost" Jul 2 06:53:04.519206 kubelet[1991]: W0702 06:53:04.519023 1991 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:04.519206 kubelet[1991]: E0702 06:53:04.519115 1991 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.35:6443: connect: connection refused Jul 2 06:53:04.567385 containerd[1272]: time="2024-07-02T06:53:04.567300413Z" level=info msg="CreateContainer within sandbox \"394e17dd8effc062a6c3d5fea14850226959d44d5a66267d3a4a849b90cc0a9a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3cf1baa4af2d0d10738f27655b74339414195558151d176ae884a0380aac7301\"" Jul 2 06:53:04.568309 containerd[1272]: time="2024-07-02T06:53:04.568260162Z" level=info msg="StartContainer for \"3cf1baa4af2d0d10738f27655b74339414195558151d176ae884a0380aac7301\"" Jul 2 06:53:04.604918 systemd[1]: Started cri-containerd-3cf1baa4af2d0d10738f27655b74339414195558151d176ae884a0380aac7301.scope - libcontainer container 3cf1baa4af2d0d10738f27655b74339414195558151d176ae884a0380aac7301. 
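The recurring kubelet dns.go errors ("Nameserver limits exceeded ... the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8") mean the resolver file the kubelet reads lists more nameservers than the three it will propagate to pods, so it keeps only the first three. A rough check as a sketch; the /etc/resolv.conf path and the helper name are assumptions, since the kubelet may be pointed at a different file via its resolvConf setting:

    # Sketch: count nameserver entries the way the kubelet does before it warns.
    MAX_POD_NAMESERVERS = 3  # limit the kubelet applies (matches the glibc resolver)

    def nameservers(path: str = "/etc/resolv.conf") -> list:
        entries = []
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    entries.append(parts[1])
        return entries

    ns = nameservers()
    if len(ns) > MAX_POD_NAMESERVERS:
        print(f"{len(ns)} nameservers; only {ns[:MAX_POD_NAMESERVERS]} will be applied")
    else:
        print(f"{len(ns)} nameservers; within the limit")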
Jul 2 06:53:04.616000 audit: BPF prog-id=67 op=LOAD Jul 2 06:53:04.616000 audit: BPF prog-id=68 op=LOAD Jul 2 06:53:04.616000 audit[2166]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2073 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663162616134616632643064313037333866323736353562373433 Jul 2 06:53:04.616000 audit: BPF prog-id=69 op=LOAD Jul 2 06:53:04.616000 audit[2166]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2073 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663162616134616632643064313037333866323736353562373433 Jul 2 06:53:04.616000 audit: BPF prog-id=69 op=UNLOAD Jul 2 06:53:04.616000 audit: BPF prog-id=68 op=UNLOAD Jul 2 06:53:04.616000 audit: BPF prog-id=70 op=LOAD Jul 2 06:53:04.616000 audit[2166]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2073 pid=2166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3363663162616134616632643064313037333866323736353562373433 Jul 2 06:53:04.683978 containerd[1272]: time="2024-07-02T06:53:04.683914971Z" level=info msg="StartContainer for \"3cf1baa4af2d0d10738f27655b74339414195558151d176ae884a0380aac7301\" returns successfully" Jul 2 06:53:04.684133 containerd[1272]: time="2024-07-02T06:53:04.683960147Z" level=info msg="CreateContainer within sandbox \"d78223a67714abe1d8cdc1a1e163e1830ce536ce17e13722cfa91d8e3213a8c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"723d7dc8489906a377926bb043cfc6244303035c68f00ad97312c6575af1d231\"" Jul 2 06:53:04.684823 containerd[1272]: time="2024-07-02T06:53:04.684766175Z" level=info msg="StartContainer for \"723d7dc8489906a377926bb043cfc6244303035c68f00ad97312c6575af1d231\"" Jul 2 06:53:04.705911 systemd[1]: Started cri-containerd-723d7dc8489906a377926bb043cfc6244303035c68f00ad97312c6575af1d231.scope - libcontainer container 723d7dc8489906a377926bb043cfc6244303035c68f00ad97312c6575af1d231. 
Jul 2 06:53:04.715000 audit: BPF prog-id=71 op=LOAD Jul 2 06:53:04.715000 audit: BPF prog-id=72 op=LOAD Jul 2 06:53:04.715000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2037 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732336437646338343839393036613337373932366262303433636663 Jul 2 06:53:04.715000 audit: BPF prog-id=73 op=LOAD Jul 2 06:53:04.715000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2037 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732336437646338343839393036613337373932366262303433636663 Jul 2 06:53:04.715000 audit: BPF prog-id=73 op=UNLOAD Jul 2 06:53:04.715000 audit: BPF prog-id=72 op=UNLOAD Jul 2 06:53:04.715000 audit: BPF prog-id=74 op=LOAD Jul 2 06:53:04.715000 audit[2204]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2037 pid=2204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.715000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732336437646338343839393036613337373932366262303433636663 Jul 2 06:53:04.727085 containerd[1272]: time="2024-07-02T06:53:04.727034395Z" level=info msg="CreateContainer within sandbox \"fa86039e052aab567b5678f1132776706b746bea09ce87314d3701cfaa08b3a1\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5c8a5e65a260ebd6971ec3d4d29bbb27350b4bd5823db8044a2aeccada4b7545\"" Jul 2 06:53:04.727905 containerd[1272]: time="2024-07-02T06:53:04.727863809Z" level=info msg="StartContainer for \"5c8a5e65a260ebd6971ec3d4d29bbb27350b4bd5823db8044a2aeccada4b7545\"" Jul 2 06:53:04.755045 systemd[1]: Started cri-containerd-5c8a5e65a260ebd6971ec3d4d29bbb27350b4bd5823db8044a2aeccada4b7545.scope - libcontainer container 5c8a5e65a260ebd6971ec3d4d29bbb27350b4bd5823db8044a2aeccada4b7545. 
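The "connection refused" errors above (the lease controller, the RuntimeClass and Service reflectors, node registration) all target the same endpoint, https://10.0.0.35:6443, which stays closed until the kube-apiserver container started here comes up. A small sketch for watching the raw TCP path to that endpoint; the helper name is illustrative:

    # Sketch: check whether the API server endpoint from the errors above accepts TCP.
    import socket

    def tcp_open(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("kube-apiserver 10.0.0.35:6443 reachable:", tcp_open("10.0.0.35", 6443))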
Jul 2 06:53:04.766000 audit: BPF prog-id=75 op=LOAD Jul 2 06:53:04.766000 audit: BPF prog-id=76 op=LOAD Jul 2 06:53:04.766000 audit[2230]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=2105 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563386135653635613236306562643639373165633364346432396262 Jul 2 06:53:04.766000 audit: BPF prog-id=77 op=LOAD Jul 2 06:53:04.766000 audit[2230]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=2105 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563386135653635613236306562643639373165633364346432396262 Jul 2 06:53:04.766000 audit: BPF prog-id=77 op=UNLOAD Jul 2 06:53:04.766000 audit: BPF prog-id=76 op=UNLOAD Jul 2 06:53:04.766000 audit: BPF prog-id=78 op=LOAD Jul 2 06:53:04.766000 audit[2230]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=2105 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:04.766000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563386135653635613236306562643639373165633364346432396262 Jul 2 06:53:04.772963 containerd[1272]: time="2024-07-02T06:53:04.772875037Z" level=info msg="StartContainer for \"723d7dc8489906a377926bb043cfc6244303035c68f00ad97312c6575af1d231\" returns successfully" Jul 2 06:53:04.852856 containerd[1272]: time="2024-07-02T06:53:04.852770122Z" level=info msg="StartContainer for \"5c8a5e65a260ebd6971ec3d4d29bbb27350b4bd5823db8044a2aeccada4b7545\" returns successfully" Jul 2 06:53:05.153452 kubelet[1991]: E0702 06:53:05.153346 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:05.155777 kubelet[1991]: E0702 06:53:05.155761 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:05.157475 kubelet[1991]: E0702 06:53:05.157461 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:05.457000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c237,c775 
tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:05.457000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c0006a0000 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:05.457000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:05.457000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:05.457000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=7 a1=c00049b6c0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:05.457000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:06.011000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.011000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.011000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=41 a1=c005468360 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.011000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.011000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=40 a1=c004570240 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.011000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.012000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.012000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=42 a1=c0034ca330 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.012000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.014000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.014000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=4d a1=c003577cb0 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.014000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.015000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.015000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=55 a1=c007958580 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.015000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.015000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:06.015000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=56 a1=c005469290 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:53:06.015000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:53:06.160007 kubelet[1991]: E0702 06:53:06.159980 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:06.160461 kubelet[1991]: E0702 06:53:06.160117 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:06.160461 kubelet[1991]: E0702 06:53:06.160195 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:06.478523 kubelet[1991]: E0702 06:53:06.478483 1991 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 06:53:06.875164 kubelet[1991]: E0702 06:53:06.875112 1991 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 06:53:07.161590 kubelet[1991]: E0702 06:53:07.161485 1991 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:07.321298 kubelet[1991]: E0702 06:53:07.321250 1991 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 2 06:53:07.331510 kubelet[1991]: E0702 06:53:07.331468 1991 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 2 06:53:07.434293 kubelet[1991]: I0702 06:53:07.434176 1991 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:07.471066 kubelet[1991]: I0702 06:53:07.471013 1991 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 06:53:07.479391 kubelet[1991]: E0702 06:53:07.479349 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:07.579963 kubelet[1991]: E0702 06:53:07.579925 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:07.680561 kubelet[1991]: E0702 06:53:07.680499 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:07.781227 kubelet[1991]: E0702 06:53:07.781111 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:07.881835 kubelet[1991]: E0702 06:53:07.881769 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:07.982886 kubelet[1991]: E0702 06:53:07.982839 1991 kubelet_node_status.go:462] "Error getting 
the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.083445 kubelet[1991]: E0702 06:53:08.083366 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.184243 kubelet[1991]: E0702 06:53:08.184201 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.284903 kubelet[1991]: E0702 06:53:08.284855 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.385483 kubelet[1991]: E0702 06:53:08.385362 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.485898 kubelet[1991]: E0702 06:53:08.485840 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.586720 kubelet[1991]: E0702 06:53:08.586652 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.687733 kubelet[1991]: E0702 06:53:08.687578 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.788172 kubelet[1991]: E0702 06:53:08.788111 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.888626 kubelet[1991]: E0702 06:53:08.888565 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:08.989282 kubelet[1991]: E0702 06:53:08.989149 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:09.089969 kubelet[1991]: E0702 06:53:09.089914 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:09.190026 kubelet[1991]: E0702 06:53:09.189982 1991 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:10.108206 kubelet[1991]: I0702 06:53:10.108141 1991 apiserver.go:52] "Watching apiserver" Jul 2 06:53:10.121943 kubelet[1991]: I0702 06:53:10.121880 1991 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 06:53:11.693355 systemd[1]: Reloading. Jul 2 06:53:11.823879 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
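Beyond the unit-file path warning, the notable recurring records in this stretch are the SELinux AVC denials: kube-controller-manager and kube-apiserver, running as container_t, are denied the watch permission on certificates under /etc/kubernetes/pki labeled etc_t, with permissive=0. A minimal sketch for summarizing such records from exported journal text; the regex and helper are illustrative, and the sample is taken from one of the lines above:

    # Sketch: pull comm, denied permission, path and permissive flag out of AVC records.
    import re

    AVC_RE = re.compile(
        r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}.*?'
        r'comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)".*?permissive=(?P<permissive>\d)'
    )

    def summarize_avc(lines):
        for line in lines:
            m = AVC_RE.search(line)
            if m:
                yield m["comm"], m["perm"], m["path"], m["permissive"]

    sample = ('audit[2177]: AVC avc: denied { watch } for pid=2177 '
              'comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" '
              'ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 '
              'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')
    for comm, perm, path, permissive in summarize_avc([sample]):
        print(f'{comm}: denied "{perm}" on {path} (permissive={permissive})')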
Jul 2 06:53:11.926735 kernel: kauditd_printk_skb: 86 callbacks suppressed Jul 2 06:53:11.926888 kernel: audit: type=1334 audit(1719903191.915:336): prog-id=79 op=LOAD Jul 2 06:53:11.915000 audit: BPF prog-id=79 op=LOAD Jul 2 06:53:11.930651 kernel: audit: type=1334 audit(1719903191.915:337): prog-id=41 op=UNLOAD Jul 2 06:53:11.930702 kernel: audit: type=1334 audit(1719903191.915:338): prog-id=80 op=LOAD Jul 2 06:53:11.930733 kernel: audit: type=1334 audit(1719903191.915:339): prog-id=81 op=LOAD Jul 2 06:53:11.930749 kernel: audit: type=1334 audit(1719903191.915:340): prog-id=42 op=UNLOAD Jul 2 06:53:11.915000 audit: BPF prog-id=41 op=UNLOAD Jul 2 06:53:11.915000 audit: BPF prog-id=80 op=LOAD Jul 2 06:53:11.915000 audit: BPF prog-id=81 op=LOAD Jul 2 06:53:11.915000 audit: BPF prog-id=42 op=UNLOAD Jul 2 06:53:11.915000 audit: BPF prog-id=43 op=UNLOAD Jul 2 06:53:11.916000 audit: BPF prog-id=82 op=LOAD Jul 2 06:53:11.916000 audit: BPF prog-id=63 op=UNLOAD Jul 2 06:53:11.933571 kernel: audit: type=1334 audit(1719903191.915:341): prog-id=43 op=UNLOAD Jul 2 06:53:11.933621 kernel: audit: type=1334 audit(1719903191.916:342): prog-id=82 op=LOAD Jul 2 06:53:11.933641 kernel: audit: type=1334 audit(1719903191.916:343): prog-id=63 op=UNLOAD Jul 2 06:53:11.933658 kernel: audit: type=1334 audit(1719903191.917:344): prog-id=83 op=LOAD Jul 2 06:53:11.917000 audit: BPF prog-id=83 op=LOAD Jul 2 06:53:11.934437 kernel: audit: type=1334 audit(1719903191.917:345): prog-id=55 op=UNLOAD Jul 2 06:53:11.917000 audit: BPF prog-id=55 op=UNLOAD Jul 2 06:53:11.935181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:53:11.918000 audit: BPF prog-id=84 op=LOAD Jul 2 06:53:11.918000 audit: BPF prog-id=44 op=UNLOAD Jul 2 06:53:11.918000 audit: BPF prog-id=85 op=LOAD Jul 2 06:53:11.918000 audit: BPF prog-id=86 op=LOAD Jul 2 06:53:11.918000 audit: BPF prog-id=45 op=UNLOAD Jul 2 06:53:11.918000 audit: BPF prog-id=46 op=UNLOAD Jul 2 06:53:11.918000 audit: BPF prog-id=87 op=LOAD Jul 2 06:53:11.918000 audit: BPF prog-id=47 op=UNLOAD Jul 2 06:53:11.919000 audit: BPF prog-id=88 op=LOAD Jul 2 06:53:11.919000 audit: BPF prog-id=75 op=UNLOAD Jul 2 06:53:11.919000 audit: BPF prog-id=89 op=LOAD Jul 2 06:53:11.919000 audit: BPF prog-id=90 op=LOAD Jul 2 06:53:11.919000 audit: BPF prog-id=48 op=UNLOAD Jul 2 06:53:11.919000 audit: BPF prog-id=49 op=UNLOAD Jul 2 06:53:11.920000 audit: BPF prog-id=91 op=LOAD Jul 2 06:53:11.920000 audit: BPF prog-id=67 op=UNLOAD Jul 2 06:53:11.921000 audit: BPF prog-id=92 op=LOAD Jul 2 06:53:11.921000 audit: BPF prog-id=50 op=UNLOAD Jul 2 06:53:11.921000 audit: BPF prog-id=93 op=LOAD Jul 2 06:53:11.921000 audit: BPF prog-id=94 op=LOAD Jul 2 06:53:11.921000 audit: BPF prog-id=51 op=UNLOAD Jul 2 06:53:11.921000 audit: BPF prog-id=52 op=UNLOAD Jul 2 06:53:11.922000 audit: BPF prog-id=95 op=LOAD Jul 2 06:53:11.922000 audit: BPF prog-id=59 op=UNLOAD Jul 2 06:53:11.923000 audit: BPF prog-id=96 op=LOAD Jul 2 06:53:11.923000 audit: BPF prog-id=53 op=UNLOAD Jul 2 06:53:11.923000 audit: BPF prog-id=97 op=LOAD Jul 2 06:53:11.923000 audit: BPF prog-id=54 op=UNLOAD Jul 2 06:53:11.925000 audit: BPF prog-id=98 op=LOAD Jul 2 06:53:11.925000 audit: BPF prog-id=71 op=UNLOAD Jul 2 06:53:11.954074 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 06:53:11.954246 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
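The burst of BPF prog-id LOAD/UNLOAD records around the reload is systemd re-attaching per-unit BPF programs as units are restarted, so loads and unloads should pair up over time. A small sketch, assuming the records are available as journal text, that tallies them and reports ids loaded but not yet unloaded in the sample:

    # Sketch: tally BPF prog-id LOAD/UNLOAD audit records from journal text.
    import re
    from collections import Counter

    BPF_RE = re.compile(r"BPF prog-id=(?P<id>\d+) op=(?P<op>LOAD|UNLOAD)")

    def bpf_balance(lines):
        ops, live = Counter(), set()
        for line in lines:
            for m in BPF_RE.finditer(line):
                ops[m["op"]] += 1
                (live.add if m["op"] == "LOAD" else live.discard)(int(m["id"]))
        return ops, live

    sample = ["audit: BPF prog-id=79 op=LOAD", "audit: BPF prog-id=41 op=UNLOAD"]
    ops, live = bpf_balance(sample)
    print(dict(ops), "still loaded:", sorted(live))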
Jul 2 06:53:11.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:11.969264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 06:53:12.122539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 06:53:12.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:12.179083 kubelet[2344]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:53:12.179083 kubelet[2344]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 06:53:12.179083 kubelet[2344]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 06:53:12.179575 kubelet[2344]: I0702 06:53:12.179100 2344 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 06:53:12.183625 kubelet[2344]: I0702 06:53:12.183583 2344 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 06:53:12.183625 kubelet[2344]: I0702 06:53:12.183622 2344 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 06:53:12.183889 kubelet[2344]: I0702 06:53:12.183872 2344 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 06:53:12.185450 kubelet[2344]: I0702 06:53:12.185433 2344 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 06:53:12.187310 kubelet[2344]: I0702 06:53:12.187274 2344 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 06:53:12.194695 kubelet[2344]: I0702 06:53:12.194668 2344 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 06:53:12.194908 kubelet[2344]: I0702 06:53:12.194888 2344 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 06:53:12.195074 kubelet[2344]: I0702 06:53:12.195052 2344 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 06:53:12.195148 kubelet[2344]: I0702 06:53:12.195077 2344 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 06:53:12.195148 kubelet[2344]: I0702 06:53:12.195087 2344 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 06:53:12.195148 kubelet[2344]: I0702 06:53:12.195113 2344 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:53:12.195225 kubelet[2344]: I0702 06:53:12.195188 2344 kubelet.go:396] "Attempting to sync node with API server" Jul 2 06:53:12.195225 kubelet[2344]: I0702 06:53:12.195203 2344 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 06:53:12.195225 kubelet[2344]: I0702 06:53:12.195224 2344 kubelet.go:312] "Adding apiserver pod source" Jul 2 06:53:12.195284 kubelet[2344]: I0702 06:53:12.195238 2344 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 06:53:12.195911 kubelet[2344]: I0702 06:53:12.195855 2344 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 06:53:12.196147 kubelet[2344]: I0702 06:53:12.196129 2344 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 06:53:12.198095 kubelet[2344]: I0702 06:53:12.196690 2344 server.go:1256] "Started kubelet" Jul 2 06:53:12.198095 kubelet[2344]: I0702 06:53:12.197525 2344 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 06:53:12.198095 kubelet[2344]: I0702 06:53:12.197658 2344 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 06:53:12.198095 kubelet[2344]: I0702 06:53:12.197977 2344 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 06:53:12.198982 kubelet[2344]: I0702 
06:53:12.198929 2344 server.go:461] "Adding debug handlers to kubelet server" Jul 2 06:53:12.199332 kubelet[2344]: I0702 06:53:12.199305 2344 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 06:53:12.203581 kubelet[2344]: E0702 06:53:12.203549 2344 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 2 06:53:12.203645 kubelet[2344]: I0702 06:53:12.203621 2344 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 06:53:12.203816 kubelet[2344]: I0702 06:53:12.203775 2344 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 06:53:12.204063 kubelet[2344]: I0702 06:53:12.204022 2344 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 06:53:12.207809 kubelet[2344]: E0702 06:53:12.207726 2344 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 06:53:12.207922 kubelet[2344]: I0702 06:53:12.207904 2344 factory.go:221] Registration of the systemd container factory successfully Jul 2 06:53:12.208068 kubelet[2344]: I0702 06:53:12.208046 2344 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 06:53:12.209450 kubelet[2344]: I0702 06:53:12.209427 2344 factory.go:221] Registration of the containerd container factory successfully Jul 2 06:53:12.216314 kubelet[2344]: I0702 06:53:12.216262 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 06:53:12.218044 kubelet[2344]: I0702 06:53:12.218015 2344 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 06:53:12.218107 kubelet[2344]: I0702 06:53:12.218051 2344 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 06:53:12.218107 kubelet[2344]: I0702 06:53:12.218070 2344 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 06:53:12.218159 kubelet[2344]: E0702 06:53:12.218117 2344 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 06:53:12.238021 kubelet[2344]: I0702 06:53:12.237994 2344 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 06:53:12.238021 kubelet[2344]: I0702 06:53:12.238013 2344 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 06:53:12.238021 kubelet[2344]: I0702 06:53:12.238027 2344 state_mem.go:36] "Initialized new in-memory state store" Jul 2 06:53:12.238190 kubelet[2344]: I0702 06:53:12.238150 2344 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 06:53:12.238190 kubelet[2344]: I0702 06:53:12.238169 2344 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 06:53:12.238190 kubelet[2344]: I0702 06:53:12.238174 2344 policy_none.go:49] "None policy: Start" Jul 2 06:53:12.238608 kubelet[2344]: I0702 06:53:12.238592 2344 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 06:53:12.238646 kubelet[2344]: I0702 06:53:12.238612 2344 state_mem.go:35] "Initializing new in-memory state store" Jul 2 06:53:12.239023 kubelet[2344]: I0702 06:53:12.239000 2344 state_mem.go:75] "Updated machine memory state" Jul 2 06:53:12.242444 kubelet[2344]: I0702 06:53:12.242420 2344 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 06:53:12.242659 kubelet[2344]: 
I0702 06:53:12.242627 2344 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 06:53:12.307559 kubelet[2344]: I0702 06:53:12.307512 2344 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jul 2 06:53:12.319207 kubelet[2344]: I0702 06:53:12.319159 2344 topology_manager.go:215] "Topology Admit Handler" podUID="ec90f202a412747a2b1d50a20dd4050d" podNamespace="kube-system" podName="kube-apiserver-localhost" Jul 2 06:53:12.319342 kubelet[2344]: I0702 06:53:12.319259 2344 topology_manager.go:215] "Topology Admit Handler" podUID="42b008e702ec2a5b396aebedf13804b4" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jul 2 06:53:12.319342 kubelet[2344]: I0702 06:53:12.319289 2344 topology_manager.go:215] "Topology Admit Handler" podUID="593d08bacb1d5de22dcb8f5224a99e3c" podNamespace="kube-system" podName="kube-scheduler-localhost" Jul 2 06:53:12.404353 kubelet[2344]: I0702 06:53:12.404303 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:12.404353 kubelet[2344]: I0702 06:53:12.404345 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:12.404353 kubelet[2344]: I0702 06:53:12.404366 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:12.404591 kubelet[2344]: I0702 06:53:12.404383 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:12.404591 kubelet[2344]: I0702 06:53:12.404402 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:12.404591 kubelet[2344]: I0702 06:53:12.404418 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:12.404591 kubelet[2344]: I0702 06:53:12.404434 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/ec90f202a412747a2b1d50a20dd4050d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ec90f202a412747a2b1d50a20dd4050d\") " pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:12.404591 kubelet[2344]: I0702 06:53:12.404451 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/42b008e702ec2a5b396aebedf13804b4-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"42b008e702ec2a5b396aebedf13804b4\") " pod="kube-system/kube-controller-manager-localhost" Jul 2 06:53:12.404739 kubelet[2344]: I0702 06:53:12.404492 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/593d08bacb1d5de22dcb8f5224a99e3c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"593d08bacb1d5de22dcb8f5224a99e3c\") " pod="kube-system/kube-scheduler-localhost" Jul 2 06:53:12.445205 kubelet[2344]: I0702 06:53:12.445148 2344 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jul 2 06:53:12.445385 kubelet[2344]: I0702 06:53:12.445253 2344 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jul 2 06:53:12.722252 kubelet[2344]: E0702 06:53:12.722192 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:12.722252 kubelet[2344]: E0702 06:53:12.722262 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:12.722586 kubelet[2344]: E0702 06:53:12.722559 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:13.196600 kubelet[2344]: I0702 06:53:13.196556 2344 apiserver.go:52] "Watching apiserver" Jul 2 06:53:13.204381 kubelet[2344]: I0702 06:53:13.204348 2344 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 06:53:13.226193 kubelet[2344]: E0702 06:53:13.226163 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:13.226357 kubelet[2344]: E0702 06:53:13.226171 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:13.230600 kubelet[2344]: E0702 06:53:13.230556 2344 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 2 06:53:13.231072 kubelet[2344]: E0702 06:53:13.231058 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:13.243069 kubelet[2344]: I0702 06:53:13.243031 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.242987911 podStartE2EDuration="1.242987911s" podCreationTimestamp="2024-07-02 06:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2024-07-02 06:53:13.242776183 +0000 UTC m=+1.114577415" watchObservedRunningTime="2024-07-02 06:53:13.242987911 +0000 UTC m=+1.114789133" Jul 2 06:53:13.251068 kubelet[2344]: I0702 06:53:13.251021 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.2509782170000001 podStartE2EDuration="1.250978217s" podCreationTimestamp="2024-07-02 06:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:53:13.250898637 +0000 UTC m=+1.122699869" watchObservedRunningTime="2024-07-02 06:53:13.250978217 +0000 UTC m=+1.122779439" Jul 2 06:53:13.263282 kubelet[2344]: I0702 06:53:13.263246 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.263191022 podStartE2EDuration="1.263191022s" podCreationTimestamp="2024-07-02 06:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:53:13.257440588 +0000 UTC m=+1.129241820" watchObservedRunningTime="2024-07-02 06:53:13.263191022 +0000 UTC m=+1.134992254" Jul 2 06:53:14.227861 kubelet[2344]: E0702 06:53:14.227814 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:14.228726 kubelet[2344]: E0702 06:53:14.228692 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:15.229969 kubelet[2344]: E0702 06:53:15.229907 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:16.884562 sudo[1420]: pam_unix(sudo:session): session closed for user root Jul 2 06:53:16.883000 audit[1420]: USER_END pid=1420 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:53:16.883000 audit[1420]: CRED_DISP pid=1420 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 2 06:53:17.102893 sshd[1417]: pam_unix(sshd:session): session closed for user core Jul 2 06:53:17.103000 audit[1417]: USER_END pid=1417 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:17.104904 kernel: kauditd_printk_skb: 34 callbacks suppressed Jul 2 06:53:17.104961 kernel: audit: type=1106 audit(1719903197.103:380): pid=1417 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:17.106053 systemd[1]: sshd@6-10.0.0.35:22-10.0.0.1:51914.service: Deactivated successfully. 
Jul 2 06:53:17.106981 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 06:53:17.107143 systemd[1]: session-7.scope: Consumed 4.299s CPU time. Jul 2 06:53:17.107648 systemd-logind[1264]: Session 7 logged out. Waiting for processes to exit. Jul 2 06:53:17.108516 systemd-logind[1264]: Removed session 7. Jul 2 06:53:17.103000 audit[1417]: CRED_DISP pid=1417 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:17.112945 kernel: audit: type=1104 audit(1719903197.103:381): pid=1417 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:17.113007 kernel: audit: type=1131 audit(1719903197.105:382): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.35:22-10.0.0.1:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:17.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.35:22-10.0.0.1:51914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:18.267301 kubelet[2344]: E0702 06:53:18.267261 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:19.235329 kubelet[2344]: E0702 06:53:19.235295 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:21.053812 kubelet[2344]: E0702 06:53:21.053729 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:21.239729 kubelet[2344]: E0702 06:53:21.239700 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:21.829000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520979 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 06:53:21.829000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000d292c0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:21.838378 kernel: audit: type=1400 audit(1719903201.829:383): avc: denied { watch } for pid=2177 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="vda9" ino=520979 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jul 2 06:53:21.838531 kernel: audit: type=1300 audit(1719903201.829:383): arch=c000003e syscall=254 success=no exit=-13 a0=9 a1=c000d292c0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:21.838662 kernel: audit: type=1327 audit(1719903201.829:383): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:21.829000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:22.241733 kubelet[2344]: E0702 06:53:22.241580 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:23.780758 kubelet[2344]: E0702 06:53:23.780720 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.209203 kernel: audit: type=1400 audit(1719903204.187:384): avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.209366 kernel: audit: type=1400 audit(1719903204.187:385): avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.213249 kernel: audit: type=1300 audit(1719903204.187:385): arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000d144e0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c000d144e0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.216690 kernel: audit: type=1327 audit(1719903204.187:385): 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000dd8080 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.224721 kernel: audit: type=1300 audit(1719903204.187:384): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000dd8080 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.224793 kernel: audit: type=1327 audit(1719903204.187:384): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.227840 kernel: audit: type=1400 audit(1719903204.187:386): avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d14540 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.296412 kernel: audit: type=1300 audit(1719903204.187:386): arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000d14540 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" 
exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.296585 kernel: audit: type=1327 audit(1719903204.187:386): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.296612 kernel: audit: type=1400 audit(1719903204.187:387): avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:53:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c001166340 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:53:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:53:24.540596 kubelet[2344]: I0702 06:53:24.540570 2344 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 06:53:24.541023 containerd[1272]: time="2024-07-02T06:53:24.540986591Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 06:53:24.541336 kubelet[2344]: I0702 06:53:24.541141 2344 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 06:53:25.239490 kubelet[2344]: I0702 06:53:25.239425 2344 topology_manager.go:215] "Topology Admit Handler" podUID="ff951d74-09dd-4dc6-90ee-9b937b938959" podNamespace="kube-system" podName="kube-proxy-slq97" Jul 2 06:53:25.245578 systemd[1]: Created slice kubepods-besteffort-podff951d74_09dd_4dc6_90ee_9b937b938959.slice - libcontainer container kubepods-besteffort-podff951d74_09dd_4dc6_90ee_9b937b938959.slice. 
Jul 2 06:53:25.289656 kubelet[2344]: I0702 06:53:25.289597 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ff951d74-09dd-4dc6-90ee-9b937b938959-kube-proxy\") pod \"kube-proxy-slq97\" (UID: \"ff951d74-09dd-4dc6-90ee-9b937b938959\") " pod="kube-system/kube-proxy-slq97" Jul 2 06:53:25.289656 kubelet[2344]: I0702 06:53:25.289635 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff951d74-09dd-4dc6-90ee-9b937b938959-lib-modules\") pod \"kube-proxy-slq97\" (UID: \"ff951d74-09dd-4dc6-90ee-9b937b938959\") " pod="kube-system/kube-proxy-slq97" Jul 2 06:53:25.289656 kubelet[2344]: I0702 06:53:25.289664 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff951d74-09dd-4dc6-90ee-9b937b938959-xtables-lock\") pod \"kube-proxy-slq97\" (UID: \"ff951d74-09dd-4dc6-90ee-9b937b938959\") " pod="kube-system/kube-proxy-slq97" Jul 2 06:53:25.290000 kubelet[2344]: I0702 06:53:25.289682 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dv9b7\" (UniqueName: \"kubernetes.io/projected/ff951d74-09dd-4dc6-90ee-9b937b938959-kube-api-access-dv9b7\") pod \"kube-proxy-slq97\" (UID: \"ff951d74-09dd-4dc6-90ee-9b937b938959\") " pod="kube-system/kube-proxy-slq97" Jul 2 06:53:25.462808 kubelet[2344]: I0702 06:53:25.462756 2344 topology_manager.go:215] "Topology Admit Handler" podUID="aa2633cf-fb02-4a12-82e5-a0602dd99ca3" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-wdrth" Jul 2 06:53:25.470105 systemd[1]: Created slice kubepods-besteffort-podaa2633cf_fb02_4a12_82e5_a0602dd99ca3.slice - libcontainer container kubepods-besteffort-podaa2633cf_fb02_4a12_82e5_a0602dd99ca3.slice. Jul 2 06:53:25.491054 kubelet[2344]: I0702 06:53:25.490958 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnb8r\" (UniqueName: \"kubernetes.io/projected/aa2633cf-fb02-4a12-82e5-a0602dd99ca3-kube-api-access-jnb8r\") pod \"tigera-operator-76c4974c85-wdrth\" (UID: \"aa2633cf-fb02-4a12-82e5-a0602dd99ca3\") " pod="tigera-operator/tigera-operator-76c4974c85-wdrth" Jul 2 06:53:25.491054 kubelet[2344]: I0702 06:53:25.490995 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa2633cf-fb02-4a12-82e5-a0602dd99ca3-var-lib-calico\") pod \"tigera-operator-76c4974c85-wdrth\" (UID: \"aa2633cf-fb02-4a12-82e5-a0602dd99ca3\") " pod="tigera-operator/tigera-operator-76c4974c85-wdrth" Jul 2 06:53:25.555523 kubelet[2344]: E0702 06:53:25.555489 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:25.556086 containerd[1272]: time="2024-07-02T06:53:25.556052751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-slq97,Uid:ff951d74-09dd-4dc6-90ee-9b937b938959,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:25.772073 containerd[1272]: time="2024-07-02T06:53:25.771844521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:25.772073 containerd[1272]: time="2024-07-02T06:53:25.771929871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:25.772073 containerd[1272]: time="2024-07-02T06:53:25.771947976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:25.772073 containerd[1272]: time="2024-07-02T06:53:25.771958034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:25.772547 containerd[1272]: time="2024-07-02T06:53:25.772470797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wdrth,Uid:aa2633cf-fb02-4a12-82e5-a0602dd99ca3,Namespace:tigera-operator,Attempt:0,}" Jul 2 06:53:25.795216 systemd[1]: Started cri-containerd-58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f.scope - libcontainer container 58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f. Jul 2 06:53:25.804942 containerd[1272]: time="2024-07-02T06:53:25.804807144Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:25.804942 containerd[1272]: time="2024-07-02T06:53:25.804898897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:25.804942 containerd[1272]: time="2024-07-02T06:53:25.804921700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:25.804942 containerd[1272]: time="2024-07-02T06:53:25.804936037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:25.804000 audit: BPF prog-id=99 op=LOAD Jul 2 06:53:25.805000 audit: BPF prog-id=100 op=LOAD Jul 2 06:53:25.805000 audit[2452]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2443 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643061653131653864646362386431643863303962383961376161 Jul 2 06:53:25.805000 audit: BPF prog-id=101 op=LOAD Jul 2 06:53:25.805000 audit[2452]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2443 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643061653131653864646362386431643863303962383961376161 Jul 2 06:53:25.805000 audit: BPF prog-id=101 op=UNLOAD Jul 2 06:53:25.805000 audit: BPF prog-id=100 op=UNLOAD Jul 2 06:53:25.805000 audit: BPF prog-id=102 op=LOAD Jul 2 06:53:25.805000 audit[2452]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2443 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538643061653131653864646362386431643863303962383961376161 Jul 2 06:53:25.823961 systemd[1]: Started cri-containerd-ed990e327296b4255bfcdf94e2f94bf2bfef93183fa0dd0edfa2e754f33c77c2.scope - libcontainer container ed990e327296b4255bfcdf94e2f94bf2bfef93183fa0dd0edfa2e754f33c77c2. 
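Annotation: the audit PROCTITLE fields in these records are hex-encoded command lines with NUL-separated arguments. The runc entries just above decode to "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/58d0ae11e8ddcb8d1d8c09b89a7aa..." (the kernel truncates the field), and the same decoding applies to the iptables/ip6tables records further down. A small sketch of the decoding step:

    // proctitle.go: decode an audit PROCTITLE value (hex, NUL-separated argv)
    // into a readable command line.
    package main

    import (
        "encoding/hex"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        if len(os.Args) != 2 {
            fmt.Fprintln(os.Stderr, "usage: proctitle <hex-string>")
            os.Exit(1)
        }
        raw, err := hex.DecodeString(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, "bad hex:", err)
            os.Exit(1)
        }
        // Arguments are separated by NUL bytes inside the audit record.
        args := strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00")
        fmt.Println(strings.Join(args, " "))
    }

For example, feeding it the first NETFILTER_CFG proctitle below yields "iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle", i.e. kube-proxy creating its canary chain.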
Jul 2 06:53:25.824772 containerd[1272]: time="2024-07-02T06:53:25.824729503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-slq97,Uid:ff951d74-09dd-4dc6-90ee-9b937b938959,Namespace:kube-system,Attempt:0,} returns sandbox id \"58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f\"" Jul 2 06:53:25.825834 kubelet[2344]: E0702 06:53:25.825578 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:25.827819 containerd[1272]: time="2024-07-02T06:53:25.827750665Z" level=info msg="CreateContainer within sandbox \"58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 06:53:25.838000 audit: BPF prog-id=103 op=LOAD Jul 2 06:53:25.839000 audit: BPF prog-id=104 op=LOAD Jul 2 06:53:25.839000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564393930653332373239366234323535626663646639346532663934 Jul 2 06:53:25.839000 audit: BPF prog-id=105 op=LOAD Jul 2 06:53:25.839000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564393930653332373239366234323535626663646639346532663934 Jul 2 06:53:25.839000 audit: BPF prog-id=105 op=UNLOAD Jul 2 06:53:25.839000 audit: BPF prog-id=104 op=UNLOAD Jul 2 06:53:25.839000 audit: BPF prog-id=106 op=LOAD Jul 2 06:53:25.839000 audit[2486]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2472 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.839000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6564393930653332373239366234323535626663646639346532663934 Jul 2 06:53:25.845847 containerd[1272]: time="2024-07-02T06:53:25.845749992Z" level=info msg="CreateContainer within sandbox \"58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a65c6d221400f3522466238ac177382934c9d21432bca35a8631a43344dd78f9\"" Jul 2 06:53:25.849106 containerd[1272]: time="2024-07-02T06:53:25.849049979Z" level=info msg="StartContainer for \"a65c6d221400f3522466238ac177382934c9d21432bca35a8631a43344dd78f9\"" Jul 2 
06:53:25.864365 containerd[1272]: time="2024-07-02T06:53:25.864311655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-wdrth,Uid:aa2633cf-fb02-4a12-82e5-a0602dd99ca3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ed990e327296b4255bfcdf94e2f94bf2bfef93183fa0dd0edfa2e754f33c77c2\"" Jul 2 06:53:25.866099 containerd[1272]: time="2024-07-02T06:53:25.866057934Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jul 2 06:53:25.883944 systemd[1]: Started cri-containerd-a65c6d221400f3522466238ac177382934c9d21432bca35a8631a43344dd78f9.scope - libcontainer container a65c6d221400f3522466238ac177382934c9d21432bca35a8631a43344dd78f9. Jul 2 06:53:25.895000 audit: BPF prog-id=107 op=LOAD Jul 2 06:53:25.895000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2443 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136356336643232313430306633353232343636323338616331373733 Jul 2 06:53:25.895000 audit: BPF prog-id=108 op=LOAD Jul 2 06:53:25.895000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2443 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136356336643232313430306633353232343636323338616331373733 Jul 2 06:53:25.895000 audit: BPF prog-id=108 op=UNLOAD Jul 2 06:53:25.895000 audit: BPF prog-id=107 op=UNLOAD Jul 2 06:53:25.895000 audit: BPF prog-id=109 op=LOAD Jul 2 06:53:25.895000 audit[2524]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2443 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.895000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136356336643232313430306633353232343636323338616331373733 Jul 2 06:53:25.911687 containerd[1272]: time="2024-07-02T06:53:25.911641727Z" level=info msg="StartContainer for \"a65c6d221400f3522466238ac177382934c9d21432bca35a8631a43344dd78f9\" returns successfully" Jul 2 06:53:25.966000 audit[2578]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:25.966000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdb7facff0 a2=0 a3=7ffdb7facfdc items=0 ppid=2535 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.966000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:53:25.966000 audit[2577]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:25.966000 audit[2577]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdbb4f86d0 a2=0 a3=7ffdbb4f86bc items=0 ppid=2535 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.966000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 2 06:53:25.967000 audit[2580]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:25.967000 audit[2580]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8aa1ab90 a2=0 a3=7ffc8aa1ab7c items=0 ppid=2535 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.967000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:53:25.967000 audit[2579]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:25.967000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe343d5fc0 a2=0 a3=7646cffa0f588a20 items=0 ppid=2535 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 2 06:53:25.968000 audit[2581]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:25.968000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff7696ee10 a2=0 a3=7fff7696edfc items=0 ppid=2535 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.968000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:53:25.971000 audit[2582]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:25.971000 audit[2582]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc08f8c700 a2=0 a3=7ffc08f8c6ec items=0 ppid=2535 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:25.971000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 2 06:53:26.069000 audit[2583]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.069000 audit[2583]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd72f33530 a2=0 a3=7ffd72f3351c items=0 ppid=2535 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.069000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:53:26.072000 audit[2585]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.072000 audit[2585]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffd974b69e0 a2=0 a3=7ffd974b69cc items=0 ppid=2535 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.072000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 2 06:53:26.076000 audit[2588]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.076000 audit[2588]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcfd4fcbc0 a2=0 a3=7ffcfd4fcbac items=0 ppid=2535 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 2 06:53:26.077000 audit[2589]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2589 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.077000 audit[2589]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5b9cdb40 a2=0 a3=7ffe5b9cdb2c items=0 ppid=2535 pid=2589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:53:26.079000 audit[2591]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.079000 audit[2591]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeaa17e8f0 a2=0 a3=7ffeaa17e8dc items=0 ppid=2535 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.079000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:53:26.080000 audit[2592]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.080000 audit[2592]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd1f5fbc70 a2=0 a3=7ffd1f5fbc5c items=0 ppid=2535 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.080000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:53:26.083000 audit[2594]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.083000 audit[2594]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe3f341180 a2=0 a3=7ffe3f34116c items=0 ppid=2535 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.083000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:53:26.087000 audit[2597]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.087000 audit[2597]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffee1ec7010 a2=0 a3=7ffee1ec6ffc items=0 ppid=2535 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 2 06:53:26.088000 audit[2598]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.088000 audit[2598]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe97aebdc0 a2=0 a3=7ffe97aebdac items=0 ppid=2535 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:53:26.090000 audit[2600]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2600 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.090000 
audit[2600]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc126f6650 a2=0 a3=7ffc126f663c items=0 ppid=2535 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.090000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:53:26.091000 audit[2601]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.091000 audit[2601]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc74ee9270 a2=0 a3=7ffc74ee925c items=0 ppid=2535 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:53:26.094000 audit[2603]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2603 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.094000 audit[2603]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe30edac30 a2=0 a3=7ffe30edac1c items=0 ppid=2535 pid=2603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.094000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:53:26.098000 audit[2606]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.098000 audit[2606]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffd5193820 a2=0 a3=7fffd519380c items=0 ppid=2535 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:53:26.102000 audit[2609]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2609 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.102000 audit[2609]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc0d6b6b90 a2=0 a3=7ffc0d6b6b7c items=0 ppid=2535 pid=2609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.102000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:53:26.103000 audit[2610]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2610 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.103000 audit[2610]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff7dc5a780 a2=0 a3=7fff7dc5a76c items=0 ppid=2535 pid=2610 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.103000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:53:26.105000 audit[2612]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2612 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.105000 audit[2612]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffdf16f2f70 a2=0 a3=7ffdf16f2f5c items=0 ppid=2535 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.105000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:53:26.108000 audit[2615]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2615 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.108000 audit[2615]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd2dc77dd0 a2=0 a3=7ffd2dc77dbc items=0 ppid=2535 pid=2615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.108000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:53:26.109000 audit[2616]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.109000 audit[2616]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffef3c2b010 a2=0 a3=7ffef3c2affc items=0 ppid=2535 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:53:26.112000 audit[2618]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2618 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 2 06:53:26.112000 audit[2618]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7fff9d0dbec0 a2=0 a3=7fff9d0dbeac items=0 ppid=2535 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.112000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:53:26.131000 audit[2624]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:26.131000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff074c9df0 a2=0 a3=7fff074c9ddc items=0 ppid=2535 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.131000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:26.143000 audit[2624]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:26.143000 audit[2624]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7fff074c9df0 a2=0 a3=7fff074c9ddc items=0 ppid=2535 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:26.145000 audit[2630]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2630 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.145000 audit[2630]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffda1b228f0 a2=0 a3=7ffda1b228dc items=0 ppid=2535 pid=2630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.145000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 2 06:53:26.148000 audit[2632]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2632 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.148000 audit[2632]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc88991950 a2=0 a3=7ffc8899193c items=0 ppid=2535 pid=2632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 2 06:53:26.152000 audit[2635]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2635 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.152000 audit[2635]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 
a1=7ffc3e6b2c20 a2=0 a3=7ffc3e6b2c0c items=0 ppid=2535 pid=2635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.152000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 2 06:53:26.153000 audit[2636]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2636 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.153000 audit[2636]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf76a1200 a2=0 a3=7ffcf76a11ec items=0 ppid=2535 pid=2636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 2 06:53:26.155000 audit[2638]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2638 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.155000 audit[2638]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd9ad822c0 a2=0 a3=7ffd9ad822ac items=0 ppid=2535 pid=2638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.155000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 2 06:53:26.157000 audit[2639]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2639 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.157000 audit[2639]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc7fb56130 a2=0 a3=7ffc7fb5611c items=0 ppid=2535 pid=2639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 2 06:53:26.159000 audit[2641]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2641 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.159000 audit[2641]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe33ed5020 a2=0 a3=7ffe33ed500c items=0 ppid=2535 pid=2641 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 2 
06:53:26.163000 audit[2644]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2644 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.163000 audit[2644]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7fff56c41ce0 a2=0 a3=7fff56c41ccc items=0 ppid=2535 pid=2644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.163000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 2 06:53:26.164000 audit[2645]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2645 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.164000 audit[2645]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee3d53290 a2=0 a3=7ffee3d5327c items=0 ppid=2535 pid=2645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.164000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 2 06:53:26.169000 audit[2647]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2647 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.169000 audit[2647]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3e3ba0a0 a2=0 a3=7ffe3e3ba08c items=0 ppid=2535 pid=2647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.169000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 2 06:53:26.170000 audit[2648]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2648 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.170000 audit[2648]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffeab29ecb0 a2=0 a3=7ffeab29ec9c items=0 ppid=2535 pid=2648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.170000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 2 06:53:26.173000 audit[2650]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2650 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.173000 audit[2650]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffcfe9f0480 a2=0 a3=7ffcfe9f046c items=0 ppid=2535 pid=2650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.173000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 2 06:53:26.176000 audit[2653]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2653 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.176000 audit[2653]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff4f79dec0 a2=0 a3=7fff4f79deac items=0 ppid=2535 pid=2653 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.176000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 2 06:53:26.180000 audit[2656]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2656 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.180000 audit[2656]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffcb01a600 a2=0 a3=7fffcb01a5ec items=0 ppid=2535 pid=2656 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.180000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 2 06:53:26.181000 audit[2657]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2657 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.181000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe771350a0 a2=0 a3=7ffe7713508c items=0 ppid=2535 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.181000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 2 06:53:26.183000 audit[2659]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2659 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.183000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd4791baf0 a2=0 a3=7ffd4791badc items=0 ppid=2535 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.183000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:53:26.186000 audit[2662]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2662 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.186000 
audit[2662]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fffa8690c70 a2=0 a3=7fffa8690c5c items=0 ppid=2535 pid=2662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 2 06:53:26.187000 audit[2663]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2663 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.187000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd855bdea0 a2=0 a3=7ffd855bde8c items=0 ppid=2535 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.187000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 2 06:53:26.189000 audit[2665]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2665 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.189000 audit[2665]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffefc9bcdc0 a2=0 a3=7ffefc9bcdac items=0 ppid=2535 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.189000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 2 06:53:26.191000 audit[2666]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.191000 audit[2666]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf0f04140 a2=0 a3=7ffcf0f0412c items=0 ppid=2535 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.191000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 2 06:53:26.193000 audit[2668]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2668 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.193000 audit[2668]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc7f773b60 a2=0 a3=7ffc7f773b4c items=0 ppid=2535 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:53:26.196000 audit[2671]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2671 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 2 06:53:26.196000 audit[2671]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffecc8d9e00 a2=0 a3=7ffecc8d9dec items=0 ppid=2535 pid=2671 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 2 06:53:26.199000 audit[2673]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:53:26.199000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=2004 a0=3 a1=7ffde07c3570 a2=0 a3=7ffde07c355c items=0 ppid=2535 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.199000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:26.199000 audit[2673]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2673 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 2 06:53:26.199000 audit[2673]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffde07c3570 a2=0 a3=7ffde07c355c items=0 ppid=2535 pid=2673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:26.199000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:26.247497 kubelet[2344]: E0702 06:53:26.247166 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:26.253675 kubelet[2344]: I0702 06:53:26.253630 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-slq97" podStartSLOduration=1.2535699249999999 podStartE2EDuration="1.253569925s" podCreationTimestamp="2024-07-02 06:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:53:26.253480246 +0000 UTC m=+14.125281478" watchObservedRunningTime="2024-07-02 06:53:26.253569925 +0000 UTC m=+14.125371147" Jul 2 06:53:26.545394 systemd[1]: run-containerd-runc-k8s.io-58d0ae11e8ddcb8d1d8c09b89a7aa65f6aa62a060a14e5ce0af8778e3a8a621f-runc.PmliXE.mount: Deactivated successfully. Jul 2 06:53:27.344403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount444502222.mount: Deactivated successfully. 
Jul 2 06:53:28.471391 containerd[1272]: time="2024-07-02T06:53:28.471326299Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:28.506838 containerd[1272]: time="2024-07-02T06:53:28.506738397Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076100" Jul 2 06:53:28.523043 containerd[1272]: time="2024-07-02T06:53:28.522971401Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:28.533384 containerd[1272]: time="2024-07-02T06:53:28.533356046Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:28.535587 containerd[1272]: time="2024-07-02T06:53:28.535548963Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:28.536352 containerd[1272]: time="2024-07-02T06:53:28.536160872Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.670060807s" Jul 2 06:53:28.536352 containerd[1272]: time="2024-07-02T06:53:28.536217408Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Jul 2 06:53:28.538423 containerd[1272]: time="2024-07-02T06:53:28.537908472Z" level=info msg="CreateContainer within sandbox \"ed990e327296b4255bfcdf94e2f94bf2bfef93183fa0dd0edfa2e754f33c77c2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 2 06:53:28.555629 containerd[1272]: time="2024-07-02T06:53:28.555577452Z" level=info msg="CreateContainer within sandbox \"ed990e327296b4255bfcdf94e2f94bf2bfef93183fa0dd0edfa2e754f33c77c2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"582e788373477ffae52e1436fb861d9dd8a66676a917496bc10f763c8742a6a5\"" Jul 2 06:53:28.556324 containerd[1272]: time="2024-07-02T06:53:28.556209558Z" level=info msg="StartContainer for \"582e788373477ffae52e1436fb861d9dd8a66676a917496bc10f763c8742a6a5\"" Jul 2 06:53:28.578922 systemd[1]: Started cri-containerd-582e788373477ffae52e1436fb861d9dd8a66676a917496bc10f763c8742a6a5.scope - libcontainer container 582e788373477ffae52e1436fb861d9dd8a66676a917496bc10f763c8742a6a5. 
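Annotation: the tigera-operator records above trace the CRI lifecycle as containerd logs it: PullImage of quay.io/tigera/operator:v1.34.0, then CreateContainer within the sandbox returned earlier by RunPodSandbox, then StartContainer. The sketch below issues the same three RuntimeService calls over gRPC; the socket path is the conventional containerd CRI endpoint, the names mirror the log, and everything else is an illustrative assumption rather than kubelet's actual code.

    // crilifecycle.go: sketch of RunPodSandbox -> CreateContainer ->
    // StartContainer, matching the containerd messages above.
    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed default containerd CRI socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "tigera-operator-76c4974c85-wdrth",
                Namespace: "tigera-operator",
                Uid:       "aa2633cf-fb02-4a12-82e5-a0602dd99ca3",
            },
        }
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "tigera-operator"},
                Image:    &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.34.0"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("started container", created.ContainerId)
    }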
Jul 2 06:53:28.587000 audit: BPF prog-id=110 op=LOAD Jul 2 06:53:28.587000 audit: BPF prog-id=111 op=LOAD Jul 2 06:53:28.587000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2472 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:28.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538326537383833373334373766666165353265313433366662383631 Jul 2 06:53:28.587000 audit: BPF prog-id=112 op=LOAD Jul 2 06:53:28.587000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2472 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:28.587000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538326537383833373334373766666165353265313433366662383631 Jul 2 06:53:28.587000 audit: BPF prog-id=112 op=UNLOAD Jul 2 06:53:28.588000 audit: BPF prog-id=111 op=UNLOAD Jul 2 06:53:28.588000 audit: BPF prog-id=113 op=LOAD Jul 2 06:53:28.588000 audit[2690]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2472 pid=2690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:28.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3538326537383833373334373766666165353265313433366662383631 Jul 2 06:53:28.600488 containerd[1272]: time="2024-07-02T06:53:28.600430734Z" level=info msg="StartContainer for \"582e788373477ffae52e1436fb861d9dd8a66676a917496bc10f763c8742a6a5\" returns successfully" Jul 2 06:53:29.262742 kubelet[2344]: I0702 06:53:29.262601 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-wdrth" podStartSLOduration=1.591321195 podStartE2EDuration="4.26237717s" podCreationTimestamp="2024-07-02 06:53:25 +0000 UTC" firstStartedPulling="2024-07-02 06:53:25.865427431 +0000 UTC m=+13.737228663" lastFinishedPulling="2024-07-02 06:53:28.536483406 +0000 UTC m=+16.408284638" observedRunningTime="2024-07-02 06:53:29.262102575 +0000 UTC m=+17.133903817" watchObservedRunningTime="2024-07-02 06:53:29.26237717 +0000 UTC m=+17.134178402" Jul 2 06:53:31.419000 audit[2723]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.420952 kernel: kauditd_printk_skb: 202 callbacks suppressed Jul 2 06:53:31.421011 kernel: audit: type=1325 audit(1719903211.419:462): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.419000 audit[2723]: SYSCALL arch=c000003e syscall=46 
success=yes exit=5908 a0=3 a1=7ffd9b2ce0c0 a2=0 a3=7ffd9b2ce0ac items=0 ppid=2535 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.426375 kernel: audit: type=1300 audit(1719903211.419:462): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffd9b2ce0c0 a2=0 a3=7ffd9b2ce0ac items=0 ppid=2535 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.426418 kernel: audit: type=1327 audit(1719903211.419:462): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.419000 audit[2723]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.419000 audit[2723]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9b2ce0c0 a2=0 a3=0 items=0 ppid=2535 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.436069 kernel: audit: type=1325 audit(1719903211.419:463): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.436144 kernel: audit: type=1300 audit(1719903211.419:463): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd9b2ce0c0 a2=0 a3=0 items=0 ppid=2535 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.436174 kernel: audit: type=1327 audit(1719903211.419:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.433000 audit[2725]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.433000 audit[2725]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe022c35e0 a2=0 a3=7ffe022c35cc items=0 ppid=2535 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.445326 kernel: audit: type=1325 audit(1719903211.433:464): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.445389 kernel: audit: type=1300 audit(1719903211.433:464): arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe022c35e0 a2=0 a3=7ffe022c35cc items=0 ppid=2535 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.445425 kernel: audit: type=1327 audit(1719903211.433:464): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.438000 audit[2725]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.438000 audit[2725]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe022c35e0 a2=0 a3=0 items=0 ppid=2535 pid=2725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.438000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:31.455799 kernel: audit: type=1325 audit(1719903211.438:465): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2725 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:31.553382 kubelet[2344]: I0702 06:53:31.553325 2344 topology_manager.go:215] "Topology Admit Handler" podUID="3a5bc74d-bd30-4ab6-8afa-6868be5ff09a" podNamespace="calico-system" podName="calico-typha-9d6bdd58d-w9nmb" Jul 2 06:53:31.564020 systemd[1]: Created slice kubepods-besteffort-pod3a5bc74d_bd30_4ab6_8afa_6868be5ff09a.slice - libcontainer container kubepods-besteffort-pod3a5bc74d_bd30_4ab6_8afa_6868be5ff09a.slice. Jul 2 06:53:31.600575 kubelet[2344]: I0702 06:53:31.600525 2344 topology_manager.go:215] "Topology Admit Handler" podUID="1df82370-ccd9-475f-baeb-01c95268358d" podNamespace="calico-system" podName="calico-node-gr96q" Jul 2 06:53:31.607529 systemd[1]: Created slice kubepods-besteffort-pod1df82370_ccd9_475f_baeb_01c95268358d.slice - libcontainer container kubepods-besteffort-pod1df82370_ccd9_475f_baeb_01c95268358d.slice. 
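
The "Created slice" entries above show how the pod UIDs from the "Topology Admit Handler" lines map onto systemd unit names when kubelet uses the systemd cgroup driver: dashes in the UID become underscores and the unit is nested under kubepods-besteffort. A sketch that reproduces the names seen in this log (illustrative only; the authoritative naming lives in kubelet's cgroup manager):

def besteffort_pod_slice(pod_uid: str) -> str:
    # Reproduces the slice names logged above: dashes in the pod UID become
    # underscores, and the unit sits under the kubepods-besteffort slice.
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("3a5bc74d-bd30-4ab6-8afa-6868be5ff09a"))
# -> kubepods-besteffort-pod3a5bc74d_bd30_4ab6_8afa_6868be5ff09a.slice
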
Jul 2 06:53:31.634258 kubelet[2344]: I0702 06:53:31.634187 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a5bc74d-bd30-4ab6-8afa-6868be5ff09a-tigera-ca-bundle\") pod \"calico-typha-9d6bdd58d-w9nmb\" (UID: \"3a5bc74d-bd30-4ab6-8afa-6868be5ff09a\") " pod="calico-system/calico-typha-9d6bdd58d-w9nmb" Jul 2 06:53:31.634258 kubelet[2344]: I0702 06:53:31.634242 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kdn7\" (UniqueName: \"kubernetes.io/projected/3a5bc74d-bd30-4ab6-8afa-6868be5ff09a-kube-api-access-8kdn7\") pod \"calico-typha-9d6bdd58d-w9nmb\" (UID: \"3a5bc74d-bd30-4ab6-8afa-6868be5ff09a\") " pod="calico-system/calico-typha-9d6bdd58d-w9nmb" Jul 2 06:53:31.634258 kubelet[2344]: I0702 06:53:31.634266 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-policysync\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634574 kubelet[2344]: I0702 06:53:31.634419 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-flexvol-driver-host\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634574 kubelet[2344]: I0702 06:53:31.634518 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbwh\" (UniqueName: \"kubernetes.io/projected/1df82370-ccd9-475f-baeb-01c95268358d-kube-api-access-cbbwh\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634630 kubelet[2344]: I0702 06:53:31.634578 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1df82370-ccd9-475f-baeb-01c95268358d-node-certs\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634696 kubelet[2344]: I0702 06:53:31.634675 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-lib-modules\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634729 kubelet[2344]: I0702 06:53:31.634719 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-net-dir\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634769 kubelet[2344]: I0702 06:53:31.634761 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-run-calico\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634869 
kubelet[2344]: I0702 06:53:31.634841 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1df82370-ccd9-475f-baeb-01c95268358d-tigera-ca-bundle\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634924 kubelet[2344]: I0702 06:53:31.634908 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-xtables-lock\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634957 kubelet[2344]: I0702 06:53:31.634941 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-bin-dir\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.634988 kubelet[2344]: I0702 06:53:31.634969 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3a5bc74d-bd30-4ab6-8afa-6868be5ff09a-typha-certs\") pod \"calico-typha-9d6bdd58d-w9nmb\" (UID: \"3a5bc74d-bd30-4ab6-8afa-6868be5ff09a\") " pod="calico-system/calico-typha-9d6bdd58d-w9nmb" Jul 2 06:53:31.635011 kubelet[2344]: I0702 06:53:31.634996 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-lib-calico\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.635039 kubelet[2344]: I0702 06:53:31.635023 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-log-dir\") pod \"calico-node-gr96q\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " pod="calico-system/calico-node-gr96q" Jul 2 06:53:31.709089 kubelet[2344]: I0702 06:53:31.708932 2344 topology_manager.go:215] "Topology Admit Handler" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" podNamespace="calico-system" podName="csi-node-driver-bdm7p" Jul 2 06:53:31.709270 kubelet[2344]: E0702 06:53:31.709229 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:31.735998 kubelet[2344]: I0702 06:53:31.735952 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f47c1652-2b34-4c56-adf0-effec8bb0963-socket-dir\") pod \"csi-node-driver-bdm7p\" (UID: \"f47c1652-2b34-4c56-adf0-effec8bb0963\") " pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:31.736372 kubelet[2344]: I0702 06:53:31.736355 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
\"kubernetes.io/host-path/f47c1652-2b34-4c56-adf0-effec8bb0963-registration-dir\") pod \"csi-node-driver-bdm7p\" (UID: \"f47c1652-2b34-4c56-adf0-effec8bb0963\") " pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:31.736612 kubelet[2344]: I0702 06:53:31.736596 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hxt4\" (UniqueName: \"kubernetes.io/projected/f47c1652-2b34-4c56-adf0-effec8bb0963-kube-api-access-9hxt4\") pod \"csi-node-driver-bdm7p\" (UID: \"f47c1652-2b34-4c56-adf0-effec8bb0963\") " pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:31.737255 kubelet[2344]: I0702 06:53:31.737236 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f47c1652-2b34-4c56-adf0-effec8bb0963-varrun\") pod \"csi-node-driver-bdm7p\" (UID: \"f47c1652-2b34-4c56-adf0-effec8bb0963\") " pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:31.737418 kubelet[2344]: I0702 06:53:31.737401 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f47c1652-2b34-4c56-adf0-effec8bb0963-kubelet-dir\") pod \"csi-node-driver-bdm7p\" (UID: \"f47c1652-2b34-4c56-adf0-effec8bb0963\") " pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:31.737720 kubelet[2344]: E0702 06:53:31.737694 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.737802 kubelet[2344]: W0702 06:53:31.737718 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.737802 kubelet[2344]: E0702 06:53:31.737748 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.738016 kubelet[2344]: E0702 06:53:31.737995 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.738016 kubelet[2344]: W0702 06:53:31.738010 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.738114 kubelet[2344]: E0702 06:53:31.738031 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.738276 kubelet[2344]: E0702 06:53:31.738253 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.738276 kubelet[2344]: W0702 06:53:31.738271 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.738373 kubelet[2344]: E0702 06:53:31.738285 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.738532 kubelet[2344]: E0702 06:53:31.738512 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.738532 kubelet[2344]: W0702 06:53:31.738528 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.738628 kubelet[2344]: E0702 06:53:31.738545 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739180 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.739824 kubelet[2344]: W0702 06:53:31.739221 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739253 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739486 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.739824 kubelet[2344]: W0702 06:53:31.739496 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739576 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739689 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.739824 kubelet[2344]: W0702 06:53:31.739696 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.739824 kubelet[2344]: E0702 06:53:31.739758 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.742340 kubelet[2344]: E0702 06:53:31.742237 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.742340 kubelet[2344]: W0702 06:53:31.742264 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.742340 kubelet[2344]: E0702 06:53:31.742357 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.745881 kubelet[2344]: E0702 06:53:31.745831 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.745881 kubelet[2344]: W0702 06:53:31.745879 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.746083 kubelet[2344]: E0702 06:53:31.746032 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.752909 kubelet[2344]: E0702 06:53:31.752359 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.752909 kubelet[2344]: W0702 06:53:31.752390 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.752909 kubelet[2344]: E0702 06:53:31.752461 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.752909 kubelet[2344]: E0702 06:53:31.752698 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.752909 kubelet[2344]: W0702 06:53:31.752706 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.752909 kubelet[2344]: E0702 06:53:31.752822 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.753172 kubelet[2344]: E0702 06:53:31.752933 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.753172 kubelet[2344]: W0702 06:53:31.752940 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.753172 kubelet[2344]: E0702 06:53:31.753005 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.755224 kubelet[2344]: E0702 06:53:31.755145 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.755224 kubelet[2344]: W0702 06:53:31.755159 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.755224 kubelet[2344]: E0702 06:53:31.755199 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.755444 kubelet[2344]: E0702 06:53:31.755422 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.755444 kubelet[2344]: W0702 06:53:31.755430 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.755521 kubelet[2344]: E0702 06:53:31.755515 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.755646 kubelet[2344]: E0702 06:53:31.755624 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.755646 kubelet[2344]: W0702 06:53:31.755638 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.755726 kubelet[2344]: E0702 06:53:31.755715 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.755845 kubelet[2344]: E0702 06:53:31.755824 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.755845 kubelet[2344]: W0702 06:53:31.755837 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.755911 kubelet[2344]: E0702 06:53:31.755901 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.756287 kubelet[2344]: E0702 06:53:31.756265 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.756287 kubelet[2344]: W0702 06:53:31.756280 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.756375 kubelet[2344]: E0702 06:53:31.756316 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.756573 kubelet[2344]: E0702 06:53:31.756553 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.756627 kubelet[2344]: W0702 06:53:31.756584 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.756627 kubelet[2344]: E0702 06:53:31.756600 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.757393 kubelet[2344]: E0702 06:53:31.757352 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.757393 kubelet[2344]: W0702 06:53:31.757382 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.757489 kubelet[2344]: E0702 06:53:31.757433 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.757844 kubelet[2344]: E0702 06:53:31.757820 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.757844 kubelet[2344]: W0702 06:53:31.757838 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.757928 kubelet[2344]: E0702 06:53:31.757915 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.758385 kubelet[2344]: E0702 06:53:31.758358 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.758385 kubelet[2344]: W0702 06:53:31.758375 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.758497 kubelet[2344]: E0702 06:53:31.758463 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.759446 kubelet[2344]: E0702 06:53:31.759417 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.759446 kubelet[2344]: W0702 06:53:31.759436 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.759548 kubelet[2344]: E0702 06:53:31.759533 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.759690 kubelet[2344]: E0702 06:53:31.759668 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.759690 kubelet[2344]: W0702 06:53:31.759684 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.759826 kubelet[2344]: E0702 06:53:31.759745 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.759850 kubelet[2344]: E0702 06:53:31.759844 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.759877 kubelet[2344]: W0702 06:53:31.759851 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.759922 kubelet[2344]: E0702 06:53:31.759913 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.760033 kubelet[2344]: E0702 06:53:31.760011 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.760033 kubelet[2344]: W0702 06:53:31.760026 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.760108 kubelet[2344]: E0702 06:53:31.760087 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.760208 kubelet[2344]: E0702 06:53:31.760189 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.760208 kubelet[2344]: W0702 06:53:31.760202 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.760280 kubelet[2344]: E0702 06:53:31.760263 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.760403 kubelet[2344]: E0702 06:53:31.760383 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.760403 kubelet[2344]: W0702 06:53:31.760396 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.760403 kubelet[2344]: E0702 06:53:31.760407 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.760649 kubelet[2344]: E0702 06:53:31.760626 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.760649 kubelet[2344]: W0702 06:53:31.760640 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.760649 kubelet[2344]: E0702 06:53:31.760652 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.762997 kubelet[2344]: E0702 06:53:31.762968 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.763133 kubelet[2344]: W0702 06:53:31.762986 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.763133 kubelet[2344]: E0702 06:53:31.763029 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.763308 kubelet[2344]: E0702 06:53:31.763277 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.763308 kubelet[2344]: W0702 06:53:31.763296 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.763404 kubelet[2344]: E0702 06:53:31.763382 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.764131 kubelet[2344]: E0702 06:53:31.764102 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.764200 kubelet[2344]: W0702 06:53:31.764145 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.764250 kubelet[2344]: E0702 06:53:31.764229 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.764972 kubelet[2344]: E0702 06:53:31.764953 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.765042 kubelet[2344]: W0702 06:53:31.765027 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.765231 kubelet[2344]: E0702 06:53:31.765206 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.765394 kubelet[2344]: E0702 06:53:31.765384 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.765453 kubelet[2344]: W0702 06:53:31.765443 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.765670 kubelet[2344]: E0702 06:53:31.765661 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.765755 kubelet[2344]: W0702 06:53:31.765736 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.765881 kubelet[2344]: E0702 06:53:31.765855 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.765943 kubelet[2344]: E0702 06:53:31.765887 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.766170 kubelet[2344]: E0702 06:53:31.766160 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.766241 kubelet[2344]: W0702 06:53:31.766231 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.766309 kubelet[2344]: E0702 06:53:31.766300 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.768541 kubelet[2344]: E0702 06:53:31.768420 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.768541 kubelet[2344]: W0702 06:53:31.768441 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.768541 kubelet[2344]: E0702 06:53:31.768461 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.778817 kubelet[2344]: E0702 06:53:31.778747 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.778817 kubelet[2344]: W0702 06:53:31.778790 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.778817 kubelet[2344]: E0702 06:53:31.778811 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.838371 kubelet[2344]: E0702 06:53:31.838335 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.838371 kubelet[2344]: W0702 06:53:31.838356 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.838371 kubelet[2344]: E0702 06:53:31.838376 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.838696 kubelet[2344]: E0702 06:53:31.838669 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.838696 kubelet[2344]: W0702 06:53:31.838683 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.838696 kubelet[2344]: E0702 06:53:31.838700 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.839146 kubelet[2344]: E0702 06:53:31.839117 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.839146 kubelet[2344]: W0702 06:53:31.839143 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.839271 kubelet[2344]: E0702 06:53:31.839180 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.839501 kubelet[2344]: E0702 06:53:31.839463 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.839501 kubelet[2344]: W0702 06:53:31.839491 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.839577 kubelet[2344]: E0702 06:53:31.839510 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.839729 kubelet[2344]: E0702 06:53:31.839705 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.839729 kubelet[2344]: W0702 06:53:31.839724 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.839837 kubelet[2344]: E0702 06:53:31.839744 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.839995 kubelet[2344]: E0702 06:53:31.839976 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.839995 kubelet[2344]: W0702 06:53:31.839992 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.840110 kubelet[2344]: E0702 06:53:31.840017 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.840299 kubelet[2344]: E0702 06:53:31.840273 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.840299 kubelet[2344]: W0702 06:53:31.840287 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.840508 kubelet[2344]: E0702 06:53:31.840453 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.840508 kubelet[2344]: W0702 06:53:31.840462 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.840602 kubelet[2344]: E0702 06:53:31.840589 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.840682 kubelet[2344]: E0702 06:53:31.840604 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.840883 kubelet[2344]: E0702 06:53:31.840858 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.840883 kubelet[2344]: W0702 06:53:31.840871 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.840960 kubelet[2344]: E0702 06:53:31.840901 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.841106 kubelet[2344]: E0702 06:53:31.841084 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.841106 kubelet[2344]: W0702 06:53:31.841102 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.841174 kubelet[2344]: E0702 06:53:31.841124 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.841677 kubelet[2344]: E0702 06:53:31.841653 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.841677 kubelet[2344]: W0702 06:53:31.841673 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.841762 kubelet[2344]: E0702 06:53:31.841687 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.841896 kubelet[2344]: E0702 06:53:31.841875 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.841896 kubelet[2344]: W0702 06:53:31.841892 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.841984 kubelet[2344]: E0702 06:53:31.841941 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.842071 kubelet[2344]: E0702 06:53:31.842053 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.842071 kubelet[2344]: W0702 06:53:31.842069 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.842160 kubelet[2344]: E0702 06:53:31.842134 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.842232 kubelet[2344]: E0702 06:53:31.842213 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.842232 kubelet[2344]: W0702 06:53:31.842229 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.842297 kubelet[2344]: E0702 06:53:31.842267 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.842391 kubelet[2344]: E0702 06:53:31.842373 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.842391 kubelet[2344]: W0702 06:53:31.842388 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.842494 kubelet[2344]: E0702 06:53:31.842471 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.842571 kubelet[2344]: E0702 06:53:31.842550 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.842571 kubelet[2344]: W0702 06:53:31.842568 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.842621 kubelet[2344]: E0702 06:53:31.842587 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.842866 kubelet[2344]: E0702 06:53:31.842756 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.842866 kubelet[2344]: W0702 06:53:31.842771 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.842866 kubelet[2344]: E0702 06:53:31.842809 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.843012 kubelet[2344]: E0702 06:53:31.842993 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.843012 kubelet[2344]: W0702 06:53:31.843009 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.843067 kubelet[2344]: E0702 06:53:31.843028 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.843173 kubelet[2344]: E0702 06:53:31.843163 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:31.844275 containerd[1272]: time="2024-07-02T06:53:31.843849757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr96q,Uid:1df82370-ccd9-475f-baeb-01c95268358d,Namespace:calico-system,Attempt:0,}" Jul 2 06:53:31.844639 kubelet[2344]: E0702 06:53:31.843200 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.844739 kubelet[2344]: W0702 06:53:31.844711 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.844739 kubelet[2344]: E0702 06:53:31.844740 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.845141 kubelet[2344]: E0702 06:53:31.845092 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.845141 kubelet[2344]: W0702 06:53:31.845105 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.845141 kubelet[2344]: E0702 06:53:31.845121 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.845312 kubelet[2344]: E0702 06:53:31.845292 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.845312 kubelet[2344]: W0702 06:53:31.845306 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.845406 kubelet[2344]: E0702 06:53:31.845363 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.846260 kubelet[2344]: E0702 06:53:31.846245 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.846344 kubelet[2344]: W0702 06:53:31.846333 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.846556 kubelet[2344]: E0702 06:53:31.846530 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.846667 kubelet[2344]: E0702 06:53:31.846656 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.846751 kubelet[2344]: W0702 06:53:31.846724 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.846751 kubelet[2344]: E0702 06:53:31.846747 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.847199 kubelet[2344]: E0702 06:53:31.847013 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.847199 kubelet[2344]: W0702 06:53:31.847026 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.847199 kubelet[2344]: E0702 06:53:31.847039 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 06:53:31.847533 kubelet[2344]: E0702 06:53:31.847516 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.847533 kubelet[2344]: W0702 06:53:31.847529 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.847641 kubelet[2344]: E0702 06:53:31.847543 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.853482 kubelet[2344]: E0702 06:53:31.853442 2344 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 06:53:31.853482 kubelet[2344]: W0702 06:53:31.853463 2344 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 06:53:31.853482 kubelet[2344]: E0702 06:53:31.853487 2344 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 06:53:31.867066 containerd[1272]: time="2024-07-02T06:53:31.866960385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:31.867307 containerd[1272]: time="2024-07-02T06:53:31.867135884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:31.867307 containerd[1272]: time="2024-07-02T06:53:31.867222406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:31.867307 containerd[1272]: time="2024-07-02T06:53:31.867243265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:31.871084 kubelet[2344]: E0702 06:53:31.870716 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:31.873056 containerd[1272]: time="2024-07-02T06:53:31.873014669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d6bdd58d-w9nmb,Uid:3a5bc74d-bd30-4ab6-8afa-6868be5ff09a,Namespace:calico-system,Attempt:0,}" Jul 2 06:53:31.890064 systemd[1]: Started cri-containerd-42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71.scope - libcontainer container 42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71. 
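Annotation: the repeated driver-call.go and plugins.go errors above come from the kubelet probing its FlexVolume plugin directory. It executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and parses the command's stdout as JSON; since that binary is not installed yet (Calico's flexvol-driver init container is what places it there), stdout is empty and the unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of the response shape the kubelet expects from a FlexVolume driver's init call; the exact fields beyond "status" and the driver path are illustrative, not taken from this log.

    #!/usr/bin/env python3
    # Sketch of a FlexVolume driver answering "init".
    # The kubelet runs the driver and parses its stdout as JSON; an empty
    # stdout is what produces the "unexpected end of JSON input" errors above.
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # A "Success" status is required; "capabilities" is optional.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Operations this sketch does not implement.
        print(json.dumps({"status": "Not supported", "message": f"operation {op!r} not implemented"}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())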
Jul 2 06:53:31.903000 audit: BPF prog-id=114 op=LOAD Jul 2 06:53:31.903000 audit: BPF prog-id=115 op=LOAD Jul 2 06:53:31.903000 audit[2812]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=2802 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432663663616564373261646165633733626463376566376332306363 Jul 2 06:53:31.904000 audit: BPF prog-id=116 op=LOAD Jul 2 06:53:31.904000 audit[2812]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=2802 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432663663616564373261646165633733626463376566376332306363 Jul 2 06:53:31.904000 audit: BPF prog-id=116 op=UNLOAD Jul 2 06:53:31.904000 audit: BPF prog-id=115 op=UNLOAD Jul 2 06:53:31.904000 audit: BPF prog-id=117 op=LOAD Jul 2 06:53:31.904000 audit[2812]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=2802 pid=2812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:31.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432663663616564373261646165633733626463376566376332306363 Jul 2 06:53:31.920800 containerd[1272]: time="2024-07-02T06:53:31.920743081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gr96q,Uid:1df82370-ccd9-475f-baeb-01c95268358d,Namespace:calico-system,Attempt:0,} returns sandbox id \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\"" Jul 2 06:53:31.922521 kubelet[2344]: E0702 06:53:31.922432 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:31.924770 containerd[1272]: time="2024-07-02T06:53:31.924730336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 06:53:32.180699 containerd[1272]: time="2024-07-02T06:53:32.180113938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:32.180699 containerd[1272]: time="2024-07-02T06:53:32.180178108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:32.180699 containerd[1272]: time="2024-07-02T06:53:32.180206621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:32.180699 containerd[1272]: time="2024-07-02T06:53:32.180219485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:32.198092 systemd[1]: Started cri-containerd-b12af561a3204e01666f11269490a75e57630fbec7bbec5c34068325f0d1e7f7.scope - libcontainer container b12af561a3204e01666f11269490a75e57630fbec7bbec5c34068325f0d1e7f7. Jul 2 06:53:32.207000 audit: BPF prog-id=118 op=LOAD Jul 2 06:53:32.207000 audit: BPF prog-id=119 op=LOAD Jul 2 06:53:32.207000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:32.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326166353631613332303465303136363666313132363934393061 Jul 2 06:53:32.207000 audit: BPF prog-id=120 op=LOAD Jul 2 06:53:32.207000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:32.207000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326166353631613332303465303136363666313132363934393061 Jul 2 06:53:32.208000 audit: BPF prog-id=120 op=UNLOAD Jul 2 06:53:32.208000 audit: BPF prog-id=119 op=UNLOAD Jul 2 06:53:32.208000 audit: BPF prog-id=121 op=LOAD Jul 2 06:53:32.208000 audit[2854]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2842 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:32.208000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6231326166353631613332303465303136363666313132363934393061 Jul 2 06:53:32.235104 containerd[1272]: time="2024-07-02T06:53:32.235051166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9d6bdd58d-w9nmb,Uid:3a5bc74d-bd30-4ab6-8afa-6868be5ff09a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b12af561a3204e01666f11269490a75e57630fbec7bbec5c34068325f0d1e7f7\"" Jul 2 06:53:32.236050 kubelet[2344]: E0702 06:53:32.236018 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:32.457000 audit[2879]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:32.457000 audit[2879]: SYSCALL arch=c000003e syscall=46 success=yes exit=5908 a0=3 a1=7ffe64c5e080 a2=0 
a3=7ffe64c5e06c items=0 ppid=2535 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:32.457000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:32.458000 audit[2879]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:32.458000 audit[2879]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe64c5e080 a2=0 a3=0 items=0 ppid=2535 pid=2879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:32.458000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:33.218980 kubelet[2344]: E0702 06:53:33.218905 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:33.870490 containerd[1272]: time="2024-07-02T06:53:33.870402300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:33.871633 containerd[1272]: time="2024-07-02T06:53:33.871547849Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Jul 2 06:53:33.872809 containerd[1272]: time="2024-07-02T06:53:33.872754384Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:33.874561 containerd[1272]: time="2024-07-02T06:53:33.874533543Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:33.877138 containerd[1272]: time="2024-07-02T06:53:33.877069573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:33.878005 containerd[1272]: time="2024-07-02T06:53:33.877957278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.953059288s" Jul 2 06:53:33.878060 containerd[1272]: time="2024-07-02T06:53:33.878006461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 06:53:33.878644 containerd[1272]: time="2024-07-02T06:53:33.878621906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jul 
2 06:53:33.879844 containerd[1272]: time="2024-07-02T06:53:33.879810617Z" level=info msg="CreateContainer within sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:53:33.905732 containerd[1272]: time="2024-07-02T06:53:33.905641555Z" level=info msg="CreateContainer within sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\"" Jul 2 06:53:33.906386 containerd[1272]: time="2024-07-02T06:53:33.906352529Z" level=info msg="StartContainer for \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\"" Jul 2 06:53:33.943985 systemd[1]: Started cri-containerd-0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241.scope - libcontainer container 0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241. Jul 2 06:53:33.958000 audit: BPF prog-id=122 op=LOAD Jul 2 06:53:33.958000 audit[2891]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001b1988 a2=78 a3=0 items=0 ppid=2802 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:33.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386562396430643735333865306163643434303765386139363231 Jul 2 06:53:33.958000 audit: BPF prog-id=123 op=LOAD Jul 2 06:53:33.958000 audit[2891]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001b1720 a2=78 a3=0 items=0 ppid=2802 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:33.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386562396430643735333865306163643434303765386139363231 Jul 2 06:53:33.958000 audit: BPF prog-id=123 op=UNLOAD Jul 2 06:53:33.958000 audit: BPF prog-id=122 op=UNLOAD Jul 2 06:53:33.958000 audit: BPF prog-id=124 op=LOAD Jul 2 06:53:33.958000 audit[2891]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001b1be0 a2=78 a3=0 items=0 ppid=2802 pid=2891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:33.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3065386562396430643735333865306163643434303765386139363231 Jul 2 06:53:33.979301 containerd[1272]: time="2024-07-02T06:53:33.979243016Z" level=info msg="StartContainer for \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\" returns successfully" Jul 2 06:53:33.983403 systemd[1]: cri-containerd-0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241.scope: Deactivated successfully. 
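Annotation: the audit records interleaved with the containerd messages (the BPF prog-id LOAD/UNLOAD lines and the SYSCALL/PROCTITLE pairs) trace the runc invocations that set up each container. The proctitle field is hex-encoded because the process title contains NUL-separated arguments. A small sketch of decoding one of those values back into a readable command line, using the string exactly as it appears in the records above (the audit subsystem truncates it, so the container ID is cut short):

    # Decode an audit PROCTITLE value (hex-encoded, NUL-separated argv) into text.
    def decode_proctitle(hex_value: str) -> str:
        raw = bytes.fromhex(hex_value)
        return " ".join(arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg)

    # First PROCTITLE from the records above.
    sample = ("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
              "002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E"
              "74696D652E76322E7461736B2F6B38732E696F2F3432663663616564373261646165633733626463"
              "376566376332306363")
    print(decode_proctitle(sample))
    # -> runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/42f6caed72adaec73bdc7ef7c20cc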
Jul 2 06:53:33.986000 audit: BPF prog-id=124 op=UNLOAD Jul 2 06:53:34.005923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241-rootfs.mount: Deactivated successfully. Jul 2 06:53:34.029946 containerd[1272]: time="2024-07-02T06:53:34.029841454Z" level=info msg="shim disconnected" id=0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241 namespace=k8s.io Jul 2 06:53:34.029946 containerd[1272]: time="2024-07-02T06:53:34.029908900Z" level=warning msg="cleaning up after shim disconnected" id=0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241 namespace=k8s.io Jul 2 06:53:34.029946 containerd[1272]: time="2024-07-02T06:53:34.029919069Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:53:34.269881 containerd[1272]: time="2024-07-02T06:53:34.269727469Z" level=info msg="StopPodSandbox for \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\"" Jul 2 06:53:34.269881 containerd[1272]: time="2024-07-02T06:53:34.269798362Z" level=info msg="Container to stop \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 06:53:34.275537 systemd[1]: cri-containerd-42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71.scope: Deactivated successfully. Jul 2 06:53:34.274000 audit: BPF prog-id=114 op=UNLOAD Jul 2 06:53:34.280000 audit: BPF prog-id=117 op=UNLOAD Jul 2 06:53:34.373800 containerd[1272]: time="2024-07-02T06:53:34.373575287Z" level=info msg="shim disconnected" id=42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71 namespace=k8s.io Jul 2 06:53:34.373800 containerd[1272]: time="2024-07-02T06:53:34.373633736Z" level=warning msg="cleaning up after shim disconnected" id=42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71 namespace=k8s.io Jul 2 06:53:34.373800 containerd[1272]: time="2024-07-02T06:53:34.373644697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:53:34.386141 containerd[1272]: time="2024-07-02T06:53:34.386080094Z" level=info msg="TearDown network for sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" successfully" Jul 2 06:53:34.386141 containerd[1272]: time="2024-07-02T06:53:34.386131630Z" level=info msg="StopPodSandbox for \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" returns successfully" Jul 2 06:53:34.460449 kubelet[2344]: I0702 06:53:34.460387 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-bin-dir\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.460449 kubelet[2344]: I0702 06:53:34.460451 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-flexvol-driver-host\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461105 kubelet[2344]: I0702 06:53:34.460489 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-xtables-lock\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461105 kubelet[2344]: I0702 
06:53:34.460509 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461105 kubelet[2344]: I0702 06:53:34.460523 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461105 kubelet[2344]: I0702 06:53:34.460532 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbwh\" (UniqueName: \"kubernetes.io/projected/1df82370-ccd9-475f-baeb-01c95268358d-kube-api-access-cbbwh\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461105 kubelet[2344]: I0702 06:53:34.460584 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460589 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-lib-modules\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460604 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460621 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-run-calico\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460654 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1df82370-ccd9-475f-baeb-01c95268358d-tigera-ca-bundle\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460680 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-log-dir\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461275 kubelet[2344]: I0702 06:53:34.460688 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460714 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-lib-calico\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460742 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-net-dir\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460767 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-policysync\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460862 2344 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1df82370-ccd9-475f-baeb-01c95268358d-node-certs\") pod \"1df82370-ccd9-475f-baeb-01c95268358d\" (UID: \"1df82370-ccd9-475f-baeb-01c95268358d\") " Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460935 2344 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.461431 kubelet[2344]: I0702 06:53:34.460952 2344 reconciler_common.go:300] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-bin-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.460937 2344 operation_generator.go:887] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.460968 2344 reconciler_common.go:300] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-flexvol-driver-host\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.461002 2344 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.461018 2344 reconciler_common.go:300] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-run-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.461043 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461638 kubelet[2344]: I0702 06:53:34.461068 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461846 kubelet[2344]: I0702 06:53:34.461100 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-policysync" (OuterVolumeSpecName: "policysync") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 06:53:34.461846 kubelet[2344]: I0702 06:53:34.461104 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1df82370-ccd9-475f-baeb-01c95268358d-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 06:53:34.464121 kubelet[2344]: I0702 06:53:34.464080 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1df82370-ccd9-475f-baeb-01c95268358d-kube-api-access-cbbwh" (OuterVolumeSpecName: "kube-api-access-cbbwh") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "kube-api-access-cbbwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 06:53:34.464239 kubelet[2344]: I0702 06:53:34.464160 2344 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1df82370-ccd9-475f-baeb-01c95268358d-node-certs" (OuterVolumeSpecName: "node-certs") pod "1df82370-ccd9-475f-baeb-01c95268358d" (UID: "1df82370-ccd9-475f-baeb-01c95268358d"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561659 2344 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cbbwh\" (UniqueName: \"kubernetes.io/projected/1df82370-ccd9-475f-baeb-01c95268358d-kube-api-access-cbbwh\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561714 2344 reconciler_common.go:300] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1df82370-ccd9-475f-baeb-01c95268358d-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561728 2344 reconciler_common.go:300] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-log-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561740 2344 reconciler_common.go:300] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-var-lib-calico\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561751 2344 reconciler_common.go:300] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-policysync\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561766 2344 reconciler_common.go:300] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/1df82370-ccd9-475f-baeb-01c95268358d-node-certs\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.561773 kubelet[2344]: I0702 06:53:34.561809 2344 reconciler_common.go:300] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/1df82370-ccd9-475f-baeb-01c95268358d-cni-net-dir\") on node \"localhost\" DevicePath \"\"" Jul 2 06:53:34.902112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71-rootfs.mount: Deactivated successfully. Jul 2 06:53:34.902224 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71-shm.mount: Deactivated successfully. Jul 2 06:53:34.902286 systemd[1]: var-lib-kubelet-pods-1df82370\x2dccd9\x2d475f\x2dbaeb\x2d01c95268358d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcbbwh.mount: Deactivated successfully. Jul 2 06:53:34.902343 systemd[1]: var-lib-kubelet-pods-1df82370\x2dccd9\x2d475f\x2dbaeb\x2d01c95268358d-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. 
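Annotation: the mount units that systemd reports as deactivated above use systemd's unit-name escaping, in which "/" is written as "-" and characters such as "-" or "~" become \xNN sequences. The sketch below reverses that escaping by hand to recover the kubelet volume paths; it reimplements the rule rather than calling systemd-escape, so treat it as illustrative only.

    import re

    def systemd_unescape(unit: str) -> str:
        """Reverse systemd unit-name escaping: '-' stands for '/', '\\xNN' for a literal character."""
        name = unit.removesuffix(".mount")
        path = name.replace("-", "/")                      # '-' encodes '/'
        return re.sub(r"\\x([0-9a-fA-F]{2})",              # '\xNN' encodes the original character
                      lambda m: chr(int(m.group(1), 16)), path)

    unit = (r"var-lib-kubelet-pods-1df82370\x2dccd9\x2d475f\x2dbaeb\x2d01c95268358d"
            r"-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount")
    print("/" + systemd_unescape(unit))
    # -> /var/lib/kubelet/pods/1df82370-ccd9-475f-baeb-01c95268358d/volumes/kubernetes.io~secret/node-certs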
Jul 2 06:53:35.218699 kubelet[2344]: E0702 06:53:35.218557 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:35.275071 kubelet[2344]: I0702 06:53:35.275033 2344 scope.go:117] "RemoveContainer" containerID="0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241" Jul 2 06:53:35.277073 containerd[1272]: time="2024-07-02T06:53:35.277031115Z" level=info msg="RemoveContainer for \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\"" Jul 2 06:53:35.279806 systemd[1]: Removed slice kubepods-besteffort-pod1df82370_ccd9_475f_baeb_01c95268358d.slice - libcontainer container kubepods-besteffort-pod1df82370_ccd9_475f_baeb_01c95268358d.slice. Jul 2 06:53:35.281934 containerd[1272]: time="2024-07-02T06:53:35.281874976Z" level=info msg="RemoveContainer for \"0e8eb9d0d7538e0acd4407e8a9621ace97fb09eda91eeb5e6e89a1ae9df23241\" returns successfully" Jul 2 06:53:35.306083 kubelet[2344]: I0702 06:53:35.306020 2344 topology_manager.go:215] "Topology Admit Handler" podUID="6ad1a4fc-e012-4ea2-baa2-db08b7b62df6" podNamespace="calico-system" podName="calico-node-9fw5t" Jul 2 06:53:35.306418 kubelet[2344]: E0702 06:53:35.306406 2344 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1df82370-ccd9-475f-baeb-01c95268358d" containerName="flexvol-driver" Jul 2 06:53:35.306576 kubelet[2344]: I0702 06:53:35.306564 2344 memory_manager.go:354] "RemoveStaleState removing state" podUID="1df82370-ccd9-475f-baeb-01c95268358d" containerName="flexvol-driver" Jul 2 06:53:35.317659 systemd[1]: Created slice kubepods-besteffort-pod6ad1a4fc_e012_4ea2_baa2_db08b7b62df6.slice - libcontainer container kubepods-besteffort-pod6ad1a4fc_e012_4ea2_baa2_db08b7b62df6.slice. 
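Annotation: the recurring pod_workers.go "cni plugin not initialized" errors for csi-node-driver-bdm7p mean no CNI network configuration has been written yet, which is expected while calico-node is still being (re)created. A quick sketch for checking that state on the node; /etc/cni/net.d is the kubelet's conventional CNI config directory, but the exact path on a given node is an assumption here.

    # Check whether any CNI network config exists yet (conventional conf dir; path is an assumption).
    from pathlib import Path

    conf_dir = Path("/etc/cni/net.d")
    configs = sorted(p.name for p in conf_dir.glob("*")) if conf_dir.exists() else []
    print("CNI configs:", configs or "none (network plugin not initialized)")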
Jul 2 06:53:35.369376 kubelet[2344]: I0702 06:53:35.369334 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-cni-log-dir\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369376 kubelet[2344]: I0702 06:53:35.369377 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-tigera-ca-bundle\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369611 kubelet[2344]: I0702 06:53:35.369405 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-var-run-calico\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369611 kubelet[2344]: I0702 06:53:35.369425 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g7qv\" (UniqueName: \"kubernetes.io/projected/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-kube-api-access-8g7qv\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369611 kubelet[2344]: I0702 06:53:35.369530 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-xtables-lock\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369611 kubelet[2344]: I0702 06:53:35.369548 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-var-lib-calico\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369611 kubelet[2344]: I0702 06:53:35.369566 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-policysync\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369743 kubelet[2344]: I0702 06:53:35.369583 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-node-certs\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369743 kubelet[2344]: I0702 06:53:35.369601 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-cni-bin-dir\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369743 kubelet[2344]: I0702 06:53:35.369617 2344 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-lib-modules\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369743 kubelet[2344]: I0702 06:53:35.369632 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-cni-net-dir\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.369743 kubelet[2344]: I0702 06:53:35.369647 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6ad1a4fc-e012-4ea2-baa2-db08b7b62df6-flexvol-driver-host\") pod \"calico-node-9fw5t\" (UID: \"6ad1a4fc-e012-4ea2-baa2-db08b7b62df6\") " pod="calico-system/calico-node-9fw5t" Jul 2 06:53:35.621491 kubelet[2344]: E0702 06:53:35.621447 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:35.622063 containerd[1272]: time="2024-07-02T06:53:35.621955433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fw5t,Uid:6ad1a4fc-e012-4ea2-baa2-db08b7b62df6,Namespace:calico-system,Attempt:0,}" Jul 2 06:53:35.763734 containerd[1272]: time="2024-07-02T06:53:35.763245575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:53:35.763734 containerd[1272]: time="2024-07-02T06:53:35.763351395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:35.763734 containerd[1272]: time="2024-07-02T06:53:35.763373065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:53:35.763734 containerd[1272]: time="2024-07-02T06:53:35.763395397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:53:35.788024 systemd[1]: Started cri-containerd-42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090.scope - libcontainer container 42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090. 
Jul 2 06:53:35.809000 audit: BPF prog-id=125 op=LOAD Jul 2 06:53:35.809000 audit: BPF prog-id=126 op=LOAD Jul 2 06:53:35.809000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2996 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432636561306636343837393337396163333636643035643930653338 Jul 2 06:53:35.809000 audit: BPF prog-id=127 op=LOAD Jul 2 06:53:35.809000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2996 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432636561306636343837393337396163333636643035643930653338 Jul 2 06:53:35.809000 audit: BPF prog-id=127 op=UNLOAD Jul 2 06:53:35.809000 audit: BPF prog-id=126 op=UNLOAD Jul 2 06:53:35.809000 audit: BPF prog-id=128 op=LOAD Jul 2 06:53:35.809000 audit[3007]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2996 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.809000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3432636561306636343837393337396163333636643035643930653338 Jul 2 06:53:35.824658 containerd[1272]: time="2024-07-02T06:53:35.824602818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9fw5t,Uid:6ad1a4fc-e012-4ea2-baa2-db08b7b62df6,Namespace:calico-system,Attempt:0,} returns sandbox id \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\"" Jul 2 06:53:35.825570 kubelet[2344]: E0702 06:53:35.825529 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:35.827697 containerd[1272]: time="2024-07-02T06:53:35.827654824Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 06:53:35.855008 containerd[1272]: time="2024-07-02T06:53:35.854944667Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3\"" Jul 2 06:53:35.856147 containerd[1272]: time="2024-07-02T06:53:35.856081159Z" level=info msg="StartContainer for \"3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3\"" Jul 2 
06:53:35.892046 systemd[1]: Started cri-containerd-3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3.scope - libcontainer container 3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3. Jul 2 06:53:35.910000 audit: BPF prog-id=129 op=LOAD Jul 2 06:53:35.910000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2996 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363834333033306630386663366464343531353036353262386663 Jul 2 06:53:35.910000 audit: BPF prog-id=130 op=LOAD Jul 2 06:53:35.910000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2996 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363834333033306630386663366464343531353036353262386663 Jul 2 06:53:35.910000 audit: BPF prog-id=130 op=UNLOAD Jul 2 06:53:35.910000 audit: BPF prog-id=129 op=UNLOAD Jul 2 06:53:35.910000 audit: BPF prog-id=131 op=LOAD Jul 2 06:53:35.910000 audit[3037]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2996 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:35.910000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3362363834333033306630386663366464343531353036353262386663 Jul 2 06:53:35.938579 systemd[1]: cri-containerd-3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3.scope: Deactivated successfully. Jul 2 06:53:35.943000 audit: BPF prog-id=131 op=UNLOAD Jul 2 06:53:35.986715 containerd[1272]: time="2024-07-02T06:53:35.986668637Z" level=info msg="StartContainer for \"3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3\" returns successfully" Jul 2 06:53:36.016050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3-rootfs.mount: Deactivated successfully. 
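Annotation: the sequence just above (StartContainer for the new flexvol-driver returns successfully, then its scope deactivates and the rootfs mount is cleaned up) is the normal lifecycle of an init container that ran to completion. A hedged sketch for confirming that from the API side, assuming the kubernetes Python client is available and that the calico-node pods carry the usual k8s-app=calico-node label; both are assumptions, not taken from this log.

    # List init-container results for calico-node pods (sketch; see assumptions above).
    from kubernetes import client, config

    config.load_kube_config()          # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod("calico-system", label_selector="k8s-app=calico-node")
    for pod in pods.items:
        for st in pod.status.init_container_statuses or []:
            term = st.state.terminated
            print(pod.metadata.name, st.name,
                  "exit_code=%s" % (term.exit_code if term else "running"),
                  "restarts=%d" % st.restart_count)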
Jul 2 06:53:36.395732 kubelet[2344]: I0702 06:53:36.395661 2344 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1df82370-ccd9-475f-baeb-01c95268358d" path="/var/lib/kubelet/pods/1df82370-ccd9-475f-baeb-01c95268358d/volumes" Jul 2 06:53:36.400474 kubelet[2344]: E0702 06:53:36.400446 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:36.869513 containerd[1272]: time="2024-07-02T06:53:36.869432046Z" level=info msg="shim disconnected" id=3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3 namespace=k8s.io Jul 2 06:53:36.869513 containerd[1272]: time="2024-07-02T06:53:36.869514020Z" level=warning msg="cleaning up after shim disconnected" id=3b6843030f08fc6dd45150652b8fc265414921be22b79cff629430717a1fefd3 namespace=k8s.io Jul 2 06:53:36.869513 containerd[1272]: time="2024-07-02T06:53:36.869525441Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:53:36.886624 containerd[1272]: time="2024-07-02T06:53:36.886542905Z" level=warning msg="cleanup warnings time=\"2024-07-02T06:53:36Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 06:53:36.957951 containerd[1272]: time="2024-07-02T06:53:36.957756999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:36.968916 containerd[1272]: time="2024-07-02T06:53:36.968822674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Jul 2 06:53:36.991819 containerd[1272]: time="2024-07-02T06:53:36.991002153Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:37.006880 containerd[1272]: time="2024-07-02T06:53:37.006815748Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:37.024151 containerd[1272]: time="2024-07-02T06:53:37.024093048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:37.024967 containerd[1272]: time="2024-07-02T06:53:37.024917935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 3.146266975s" Jul 2 06:53:37.025044 containerd[1272]: time="2024-07-02T06:53:37.024965435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Jul 2 06:53:37.033497 containerd[1272]: time="2024-07-02T06:53:37.031601366Z" level=info msg="CreateContainer within sandbox \"b12af561a3204e01666f11269490a75e57630fbec7bbec5c34068325f0d1e7f7\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 2 06:53:37.151639 containerd[1272]: time="2024-07-02T06:53:37.151445045Z" level=info 
msg="CreateContainer within sandbox \"b12af561a3204e01666f11269490a75e57630fbec7bbec5c34068325f0d1e7f7\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2376cda784b7f58d03f59dec478a04c070adf243ec1c3d2b8de561c73bb7d5e2\"" Jul 2 06:53:37.152223 containerd[1272]: time="2024-07-02T06:53:37.152183940Z" level=info msg="StartContainer for \"2376cda784b7f58d03f59dec478a04c070adf243ec1c3d2b8de561c73bb7d5e2\"" Jul 2 06:53:37.181971 systemd[1]: Started cri-containerd-2376cda784b7f58d03f59dec478a04c070adf243ec1c3d2b8de561c73bb7d5e2.scope - libcontainer container 2376cda784b7f58d03f59dec478a04c070adf243ec1c3d2b8de561c73bb7d5e2. Jul 2 06:53:37.193000 audit: BPF prog-id=132 op=LOAD Jul 2 06:53:37.199907 kernel: kauditd_printk_skb: 70 callbacks suppressed Jul 2 06:53:37.200095 kernel: audit: type=1334 audit(1719903217.193:500): prog-id=132 op=LOAD Jul 2 06:53:37.200131 kernel: audit: type=1334 audit(1719903217.194:501): prog-id=133 op=LOAD Jul 2 06:53:37.200156 kernel: audit: type=1300 audit(1719903217.194:501): arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2842 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:37.194000 audit: BPF prog-id=133 op=LOAD Jul 2 06:53:37.194000 audit[3105]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2842 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:37.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233373663646137383462376635386430336635396465633437386130 Jul 2 06:53:37.205443 kernel: audit: type=1327 audit(1719903217.194:501): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233373663646137383462376635386430336635396465633437386130 Jul 2 06:53:37.205503 kernel: audit: type=1334 audit(1719903217.194:502): prog-id=134 op=LOAD Jul 2 06:53:37.194000 audit: BPF prog-id=134 op=LOAD Jul 2 06:53:37.194000 audit[3105]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2842 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:37.210265 kernel: audit: type=1300 audit(1719903217.194:502): arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2842 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:37.210334 kernel: audit: type=1327 audit(1719903217.194:502): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233373663646137383462376635386430336635396465633437386130 Jul 2 06:53:37.194000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233373663646137383462376635386430336635396465633437386130 Jul 2 06:53:37.194000 audit: BPF prog-id=134 op=UNLOAD Jul 2 06:53:37.215349 kernel: audit: type=1334 audit(1719903217.194:503): prog-id=134 op=UNLOAD Jul 2 06:53:37.215401 kernel: audit: type=1334 audit(1719903217.194:504): prog-id=133 op=UNLOAD Jul 2 06:53:37.194000 audit: BPF prog-id=133 op=UNLOAD Jul 2 06:53:37.216332 kernel: audit: type=1334 audit(1719903217.194:505): prog-id=135 op=LOAD Jul 2 06:53:37.194000 audit: BPF prog-id=135 op=LOAD Jul 2 06:53:37.194000 audit[3105]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2842 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:37.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233373663646137383462376635386430336635396465633437386130 Jul 2 06:53:37.218590 kubelet[2344]: E0702 06:53:37.218544 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:37.232458 containerd[1272]: time="2024-07-02T06:53:37.232398208Z" level=info msg="StartContainer for \"2376cda784b7f58d03f59dec478a04c070adf243ec1c3d2b8de561c73bb7d5e2\" returns successfully" Jul 2 06:53:37.405590 kubelet[2344]: E0702 06:53:37.405457 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:37.407606 kubelet[2344]: E0702 06:53:37.407585 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:37.415500 containerd[1272]: time="2024-07-02T06:53:37.415440113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 06:53:37.423105 kubelet[2344]: I0702 06:53:37.423060 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-9d6bdd58d-w9nmb" podStartSLOduration=1.634562171 podStartE2EDuration="6.423012182s" podCreationTimestamp="2024-07-02 06:53:31 +0000 UTC" firstStartedPulling="2024-07-02 06:53:32.236700902 +0000 UTC m=+20.108502134" lastFinishedPulling="2024-07-02 06:53:37.025150902 +0000 UTC m=+24.896952145" observedRunningTime="2024-07-02 06:53:37.422205528 +0000 UTC m=+25.294006780" watchObservedRunningTime="2024-07-02 06:53:37.423012182 +0000 UTC m=+25.294813414" Jul 2 06:53:38.408918 kubelet[2344]: I0702 06:53:38.408873 2344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:53:38.409573 kubelet[2344]: E0702 06:53:38.409550 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 
06:53:39.218428 kubelet[2344]: E0702 06:53:39.218362 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:41.636603 kubelet[2344]: E0702 06:53:41.636541 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:42.981033 systemd[1]: Started sshd@7-10.0.0.35:22-10.0.0.1:53604.service - OpenSSH per-connection server daemon (10.0.0.1:53604). Jul 2 06:53:42.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:42.981928 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 2 06:53:42.981977 kernel: audit: type=1130 audit(1719903222.980:506): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:43.219043 kubelet[2344]: E0702 06:53:43.218980 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:43.321000 audit[3142]: USER_ACCT pid=3142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.322717 sshd[3142]: Accepted publickey for core from 10.0.0.1 port 53604 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:53:43.324152 sshd[3142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:53:43.322000 audit[3142]: CRED_ACQ pid=3142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.337823 systemd-logind[1264]: New session 8 of user core. 
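The pod_startup_latency_tracker entry above for calico-typha-9d6bdd58d-w9nmb is internally consistent: podStartE2EDuration is the gap between podCreationTimestamp and watchObservedRunningTime, and podStartSLOduration is that gap minus the image-pull window. A short check in Python using the m=+ monotonic offsets quoted in the entry (assuming the SLO figure is derived exactly that way, which these numbers bear out):

    # Numbers copied from the pod_startup_latency_tracker entry above.
    first_started_pulling = 20.108502134   # m=+ offset of firstStartedPulling
    last_finished_pulling = 24.896952145   # m=+ offset of lastFinishedPulling
    pod_start_e2e         = 6.423012182    # podStartE2EDuration, in seconds

    pull_window = last_finished_pulling - first_started_pulling
    print(f"image pull window   = {pull_window:.9f}s")                  # ~4.788450011s
    print(f"podStartSLOduration = {pod_start_e2e - pull_window:.9f}s")  # 1.634562171s, as logged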
Jul 2 06:53:43.340886 kernel: audit: type=1101 audit(1719903223.321:507): pid=3142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.340991 kernel: audit: type=1103 audit(1719903223.322:508): pid=3142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.341010 kernel: audit: type=1006 audit(1719903223.323:509): pid=3142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jul 2 06:53:43.323000 audit[3142]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe08ebca10 a2=3 a3=7f18b7e74480 items=0 ppid=1 pid=3142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:43.345841 kernel: audit: type=1300 audit(1719903223.323:509): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe08ebca10 a2=3 a3=7f18b7e74480 items=0 ppid=1 pid=3142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:43.345877 kernel: audit: type=1327 audit(1719903223.323:509): proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:43.323000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:43.349050 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 06:53:43.352000 audit[3142]: USER_START pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.352000 audit[3144]: CRED_ACQ pid=3144 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.362125 kernel: audit: type=1105 audit(1719903223.352:510): pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.362170 kernel: audit: type=1103 audit(1719903223.352:511): pid=3144 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.551593 sshd[3142]: pam_unix(sshd:session): session closed for user core Jul 2 06:53:43.551000 audit[3142]: USER_END pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.554028 systemd[1]: sshd@7-10.0.0.35:22-10.0.0.1:53604.service: Deactivated successfully. 
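The audit records interleaved here carry the process command line as a hex blob in the proctitle field, since argv is NUL-separated. A few lines of Python make it readable; the sample value below is copied from the type=1327 sshd record just above, and the longer runc proctitles earlier in this log decode the same way (they spell out runc --root /run/containerd/runc/k8s.io --log ... followed by the truncated task path):

    # Decode an audit proctitle: hex-encoded, NUL-separated argv.
    proctitle_hex = "737368643A20636F7265205B707269765D"
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(part.decode() for part in argv))   # -> sshd: core [priv]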
Jul 2 06:53:43.554753 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 06:53:43.555669 systemd-logind[1264]: Session 8 logged out. Waiting for processes to exit. Jul 2 06:53:43.556345 systemd-logind[1264]: Removed session 8. Jul 2 06:53:43.551000 audit[3142]: CRED_DISP pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.573518 kernel: audit: type=1106 audit(1719903223.551:512): pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.573587 kernel: audit: type=1104 audit(1719903223.551:513): pid=3142 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:43.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.35:22-10.0.0.1:53604 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:44.126146 containerd[1272]: time="2024-07-02T06:53:44.126066218Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:44.154629 containerd[1272]: time="2024-07-02T06:53:44.154553448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 06:53:44.184382 containerd[1272]: time="2024-07-02T06:53:44.184301924Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:44.244459 containerd[1272]: time="2024-07-02T06:53:44.244398534Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:44.290099 containerd[1272]: time="2024-07-02T06:53:44.290039454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:44.290719 containerd[1272]: time="2024-07-02T06:53:44.290680120Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.87519363s" Jul 2 06:53:44.290719 containerd[1272]: time="2024-07-02T06:53:44.290714878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 06:53:44.293126 containerd[1272]: time="2024-07-02T06:53:44.293088210Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 06:53:44.426064 containerd[1272]: 
time="2024-07-02T06:53:44.425900864Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada\"" Jul 2 06:53:44.426554 containerd[1272]: time="2024-07-02T06:53:44.426502626Z" level=info msg="StartContainer for \"17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada\"" Jul 2 06:53:44.458999 systemd[1]: Started cri-containerd-17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada.scope - libcontainer container 17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada. Jul 2 06:53:44.469000 audit: BPF prog-id=136 op=LOAD Jul 2 06:53:44.469000 audit[3168]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139988 a2=78 a3=0 items=0 ppid=2996 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:44.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643431363335376264623165656534356266346637646139653138 Jul 2 06:53:44.469000 audit: BPF prog-id=137 op=LOAD Jul 2 06:53:44.469000 audit[3168]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000139720 a2=78 a3=0 items=0 ppid=2996 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:44.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643431363335376264623165656534356266346637646139653138 Jul 2 06:53:44.469000 audit: BPF prog-id=137 op=UNLOAD Jul 2 06:53:44.469000 audit: BPF prog-id=136 op=UNLOAD Jul 2 06:53:44.469000 audit: BPF prog-id=138 op=LOAD Jul 2 06:53:44.469000 audit[3168]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000139be0 a2=78 a3=0 items=0 ppid=2996 pid=3168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:44.469000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3137643431363335376264623165656534356266346637646139653138 Jul 2 06:53:44.483984 containerd[1272]: time="2024-07-02T06:53:44.483917768Z" level=info msg="StartContainer for \"17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada\" returns successfully" Jul 2 06:53:45.218924 kubelet[2344]: E0702 06:53:45.218873 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:45.425755 kubelet[2344]: E0702 06:53:45.425721 2344 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:46.062912 systemd[1]: cri-containerd-17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada.scope: Deactivated successfully. Jul 2 06:53:46.068000 audit: BPF prog-id=138 op=UNLOAD Jul 2 06:53:46.087018 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada-rootfs.mount: Deactivated successfully. Jul 2 06:53:46.134383 kubelet[2344]: I0702 06:53:46.134347 2344 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 06:53:46.303317 kubelet[2344]: I0702 06:53:46.302658 2344 topology_manager.go:215] "Topology Admit Handler" podUID="b7affa88-3cb8-490c-b055-74e0023a3b4f" podNamespace="kube-system" podName="coredns-76f75df574-z6lht" Jul 2 06:53:46.305365 kubelet[2344]: I0702 06:53:46.305330 2344 topology_manager.go:215] "Topology Admit Handler" podUID="d5281e8f-feca-4ab3-9b29-c3d038aed0d0" podNamespace="calico-system" podName="calico-kube-controllers-7bf5c69fc4-6jthk" Jul 2 06:53:46.305746 kubelet[2344]: I0702 06:53:46.305699 2344 topology_manager.go:215] "Topology Admit Handler" podUID="d461e7f3-5eb3-4f3a-bd28-01c916db29c2" podNamespace="kube-system" podName="coredns-76f75df574-st82c" Jul 2 06:53:46.341125 kubelet[2344]: I0702 06:53:46.340967 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8g6c\" (UniqueName: \"kubernetes.io/projected/d5281e8f-feca-4ab3-9b29-c3d038aed0d0-kube-api-access-z8g6c\") pod \"calico-kube-controllers-7bf5c69fc4-6jthk\" (UID: \"d5281e8f-feca-4ab3-9b29-c3d038aed0d0\") " pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" Jul 2 06:53:46.341125 kubelet[2344]: I0702 06:53:46.341022 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxj94\" (UniqueName: \"kubernetes.io/projected/d461e7f3-5eb3-4f3a-bd28-01c916db29c2-kube-api-access-dxj94\") pod \"coredns-76f75df574-st82c\" (UID: \"d461e7f3-5eb3-4f3a-bd28-01c916db29c2\") " pod="kube-system/coredns-76f75df574-st82c" Jul 2 06:53:46.341125 kubelet[2344]: I0702 06:53:46.341053 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzmh\" (UniqueName: \"kubernetes.io/projected/b7affa88-3cb8-490c-b055-74e0023a3b4f-kube-api-access-mjzmh\") pod \"coredns-76f75df574-z6lht\" (UID: \"b7affa88-3cb8-490c-b055-74e0023a3b4f\") " pod="kube-system/coredns-76f75df574-z6lht" Jul 2 06:53:46.341125 kubelet[2344]: I0702 06:53:46.341085 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7affa88-3cb8-490c-b055-74e0023a3b4f-config-volume\") pod \"coredns-76f75df574-z6lht\" (UID: \"b7affa88-3cb8-490c-b055-74e0023a3b4f\") " pod="kube-system/coredns-76f75df574-z6lht" Jul 2 06:53:46.341345 kubelet[2344]: I0702 06:53:46.341168 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5281e8f-feca-4ab3-9b29-c3d038aed0d0-tigera-ca-bundle\") pod \"calico-kube-controllers-7bf5c69fc4-6jthk\" (UID: \"d5281e8f-feca-4ab3-9b29-c3d038aed0d0\") " pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" Jul 2 06:53:46.341345 kubelet[2344]: I0702 06:53:46.341301 2344 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d461e7f3-5eb3-4f3a-bd28-01c916db29c2-config-volume\") pod \"coredns-76f75df574-st82c\" (UID: \"d461e7f3-5eb3-4f3a-bd28-01c916db29c2\") " pod="kube-system/coredns-76f75df574-st82c" Jul 2 06:53:46.513559 containerd[1272]: time="2024-07-02T06:53:46.513466781Z" level=info msg="shim disconnected" id=17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada namespace=k8s.io Jul 2 06:53:46.513559 containerd[1272]: time="2024-07-02T06:53:46.513547346Z" level=warning msg="cleaning up after shim disconnected" id=17d416357bdb1eee45bf4f7da9e18cc8da7ec6af0a20e3dc2a1812c59b066ada namespace=k8s.io Jul 2 06:53:46.513559 containerd[1272]: time="2024-07-02T06:53:46.513559610Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 06:53:46.519386 systemd[1]: Created slice kubepods-burstable-podb7affa88_3cb8_490c_b055_74e0023a3b4f.slice - libcontainer container kubepods-burstable-podb7affa88_3cb8_490c_b055_74e0023a3b4f.slice. Jul 2 06:53:46.536477 kubelet[2344]: E0702 06:53:46.536444 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:46.538558 containerd[1272]: time="2024-07-02T06:53:46.538518283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6lht,Uid:b7affa88-3cb8-490c-b055-74e0023a3b4f,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:46.540463 systemd[1]: Created slice kubepods-besteffort-podd5281e8f_feca_4ab3_9b29_c3d038aed0d0.slice - libcontainer container kubepods-besteffort-podd5281e8f_feca_4ab3_9b29_c3d038aed0d0.slice. Jul 2 06:53:46.542622 containerd[1272]: time="2024-07-02T06:53:46.542580043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf5c69fc4-6jthk,Uid:d5281e8f-feca-4ab3-9b29-c3d038aed0d0,Namespace:calico-system,Attempt:0,}" Jul 2 06:53:46.544885 systemd[1]: Created slice kubepods-burstable-podd461e7f3_5eb3_4f3a_bd28_01c916db29c2.slice - libcontainer container kubepods-burstable-podd461e7f3_5eb3_4f3a_bd28_01c916db29c2.slice. 
Jul 2 06:53:46.546623 kubelet[2344]: E0702 06:53:46.546599 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:46.547027 containerd[1272]: time="2024-07-02T06:53:46.546991949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-st82c,Uid:d461e7f3-5eb3-4f3a-bd28-01c916db29c2,Namespace:kube-system,Attempt:0,}" Jul 2 06:53:46.647275 containerd[1272]: time="2024-07-02T06:53:46.647115449Z" level=error msg="Failed to destroy network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.647916 containerd[1272]: time="2024-07-02T06:53:46.647887828Z" level=error msg="encountered an error cleaning up failed sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.647996 containerd[1272]: time="2024-07-02T06:53:46.647948114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf5c69fc4-6jthk,Uid:d5281e8f-feca-4ab3-9b29-c3d038aed0d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648125 containerd[1272]: time="2024-07-02T06:53:46.648101540Z" level=error msg="Failed to destroy network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648282 kubelet[2344]: E0702 06:53:46.648239 2344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648358 kubelet[2344]: E0702 06:53:46.648326 2344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" Jul 2 06:53:46.648358 kubelet[2344]: E0702 06:53:46.648355 2344 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" Jul 2 06:53:46.648448 containerd[1272]: time="2024-07-02T06:53:46.648377372Z" level=error msg="encountered an error cleaning up failed sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648448 containerd[1272]: time="2024-07-02T06:53:46.648433820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-st82c,Uid:d461e7f3-5eb3-4f3a-bd28-01c916db29c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648513 kubelet[2344]: E0702 06:53:46.648458 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bf5c69fc4-6jthk_calico-system(d5281e8f-feca-4ab3-9b29-c3d038aed0d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bf5c69fc4-6jthk_calico-system(d5281e8f-feca-4ab3-9b29-c3d038aed0d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" podUID="d5281e8f-feca-4ab3-9b29-c3d038aed0d0" Jul 2 06:53:46.648801 kubelet[2344]: E0702 06:53:46.648769 2344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.648861 kubelet[2344]: E0702 06:53:46.648825 2344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-st82c" Jul 2 06:53:46.648861 kubelet[2344]: E0702 06:53:46.648850 2344 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-st82c" Jul 2 06:53:46.648929 kubelet[2344]: E0702 06:53:46.648893 2344 pod_workers.go:1298] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-st82c_kube-system(d461e7f3-5eb3-4f3a-bd28-01c916db29c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-st82c_kube-system(d461e7f3-5eb3-4f3a-bd28-01c916db29c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-st82c" podUID="d461e7f3-5eb3-4f3a-bd28-01c916db29c2" Jul 2 06:53:46.656575 containerd[1272]: time="2024-07-02T06:53:46.656495100Z" level=error msg="Failed to destroy network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.657056 containerd[1272]: time="2024-07-02T06:53:46.657014321Z" level=error msg="encountered an error cleaning up failed sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.657101 containerd[1272]: time="2024-07-02T06:53:46.657079547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6lht,Uid:b7affa88-3cb8-490c-b055-74e0023a3b4f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.657666 kubelet[2344]: E0702 06:53:46.657332 2344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:46.657666 kubelet[2344]: E0702 06:53:46.657391 2344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z6lht" Jul 2 06:53:46.657666 kubelet[2344]: E0702 06:53:46.657429 2344 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-z6lht" Jul 2 06:53:46.657830 
kubelet[2344]: E0702 06:53:46.657498 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-z6lht_kube-system(b7affa88-3cb8-490c-b055-74e0023a3b4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-z6lht_kube-system(b7affa88-3cb8-490c-b055-74e0023a3b4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z6lht" podUID="b7affa88-3cb8-490c-b055-74e0023a3b4f" Jul 2 06:53:47.088122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a-shm.mount: Deactivated successfully. Jul 2 06:53:47.223145 systemd[1]: Created slice kubepods-besteffort-podf47c1652_2b34_4c56_adf0_effec8bb0963.slice - libcontainer container kubepods-besteffort-podf47c1652_2b34_4c56_adf0_effec8bb0963.slice. Jul 2 06:53:47.227192 containerd[1272]: time="2024-07-02T06:53:47.227061012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdm7p,Uid:f47c1652-2b34-4c56-adf0-effec8bb0963,Namespace:calico-system,Attempt:0,}" Jul 2 06:53:47.429451 kubelet[2344]: I0702 06:53:47.429336 2344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:53:47.429985 containerd[1272]: time="2024-07-02T06:53:47.429954294Z" level=info msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" Jul 2 06:53:47.430249 containerd[1272]: time="2024-07-02T06:53:47.430230235Z" level=info msg="Ensure that sandbox 8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc in task-service has been cleanup successfully" Jul 2 06:53:47.430570 kubelet[2344]: I0702 06:53:47.430551 2344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:53:47.430940 containerd[1272]: time="2024-07-02T06:53:47.430904324Z" level=info msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" Jul 2 06:53:47.431129 containerd[1272]: time="2024-07-02T06:53:47.431103407Z" level=info msg="Ensure that sandbox 833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a in task-service has been cleanup successfully" Jul 2 06:53:47.433418 kubelet[2344]: E0702 06:53:47.433381 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:47.434470 containerd[1272]: time="2024-07-02T06:53:47.434258074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 06:53:47.434931 kubelet[2344]: I0702 06:53:47.434899 2344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:53:47.435347 containerd[1272]: time="2024-07-02T06:53:47.435326582Z" level=info msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" Jul 2 06:53:47.435567 containerd[1272]: time="2024-07-02T06:53:47.435551406Z" level=info msg="Ensure that sandbox 
ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453 in task-service has been cleanup successfully" Jul 2 06:53:47.455877 containerd[1272]: time="2024-07-02T06:53:47.455753300Z" level=error msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" failed" error="failed to destroy network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:47.456175 kubelet[2344]: E0702 06:53:47.456145 2344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:53:47.456253 kubelet[2344]: E0702 06:53:47.456221 2344 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a"} Jul 2 06:53:47.456289 kubelet[2344]: E0702 06:53:47.456258 2344 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7affa88-3cb8-490c-b055-74e0023a3b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:53:47.456289 kubelet[2344]: E0702 06:53:47.456288 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7affa88-3cb8-490c-b055-74e0023a3b4f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-z6lht" podUID="b7affa88-3cb8-490c-b055-74e0023a3b4f" Jul 2 06:53:47.456934 containerd[1272]: time="2024-07-02T06:53:47.456897033Z" level=error msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" failed" error="failed to destroy network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:47.457082 kubelet[2344]: E0702 06:53:47.457054 2344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:53:47.457082 kubelet[2344]: E0702 06:53:47.457081 2344 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc"} Jul 2 06:53:47.457158 kubelet[2344]: E0702 06:53:47.457114 2344 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d461e7f3-5eb3-4f3a-bd28-01c916db29c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:53:47.457158 kubelet[2344]: E0702 06:53:47.457134 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d461e7f3-5eb3-4f3a-bd28-01c916db29c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-st82c" podUID="d461e7f3-5eb3-4f3a-bd28-01c916db29c2" Jul 2 06:53:47.469777 containerd[1272]: time="2024-07-02T06:53:47.469704244Z" level=error msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" failed" error="failed to destroy network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:47.469982 kubelet[2344]: E0702 06:53:47.469961 2344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:53:47.470048 kubelet[2344]: E0702 06:53:47.470002 2344 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453"} Jul 2 06:53:47.470048 kubelet[2344]: E0702 06:53:47.470039 2344 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d5281e8f-feca-4ab3-9b29-c3d038aed0d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 06:53:47.470155 kubelet[2344]: E0702 06:53:47.470072 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d5281e8f-feca-4ab3-9b29-c3d038aed0d0\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" podUID="d5281e8f-feca-4ab3-9b29-c3d038aed0d0" Jul 2 06:53:48.567223 systemd[1]: Started sshd@8-10.0.0.35:22-10.0.0.1:53610.service - OpenSSH per-connection server daemon (10.0.0.1:53610). Jul 2 06:53:48.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:48.576600 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 06:53:48.577841 kernel: audit: type=1130 audit(1719903228.566:521): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:48.726000 audit[3410]: USER_ACCT pid=3410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.728050 sshd[3410]: Accepted publickey for core from 10.0.0.1 port 53610 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:53:48.729815 sshd[3410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:53:48.788245 kernel: audit: type=1101 audit(1719903228.726:522): pid=3410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.788358 kernel: audit: type=1103 audit(1719903228.728:523): pid=3410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.728000 audit[3410]: CRED_ACQ pid=3410 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.793456 kernel: audit: type=1006 audit(1719903228.728:524): pid=3410 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jul 2 06:53:48.793583 kernel: audit: type=1300 audit(1719903228.728:524): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc542c77e0 a2=3 a3=7f86258c3480 items=0 ppid=1 pid=3410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:48.728000 audit[3410]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc542c77e0 a2=3 a3=7f86258c3480 items=0 ppid=1 pid=3410 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:48.796234 systemd-logind[1264]: New session 9 of user 
core. Jul 2 06:53:48.797373 kernel: audit: type=1327 audit(1719903228.728:524): proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:48.728000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:48.804223 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 06:53:48.809000 audit[3410]: USER_START pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.810000 audit[3412]: CRED_ACQ pid=3412 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.836724 kernel: audit: type=1105 audit(1719903228.809:525): pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.836929 kernel: audit: type=1103 audit(1719903228.810:526): pid=3412 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.885637 containerd[1272]: time="2024-07-02T06:53:48.885544789Z" level=error msg="Failed to destroy network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:48.886113 containerd[1272]: time="2024-07-02T06:53:48.886033339Z" level=error msg="encountered an error cleaning up failed sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:48.886113 containerd[1272]: time="2024-07-02T06:53:48.886098826Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdm7p,Uid:f47c1652-2b34-4c56-adf0-effec8bb0963,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:48.886410 kubelet[2344]: E0702 06:53:48.886366 2344 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:48.886700 kubelet[2344]: E0702 06:53:48.886446 2344 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:48.886700 kubelet[2344]: E0702 06:53:48.886484 2344 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bdm7p" Jul 2 06:53:48.886700 kubelet[2344]: E0702 06:53:48.886548 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bdm7p_calico-system(f47c1652-2b34-4c56-adf0-effec8bb0963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bdm7p_calico-system(f47c1652-2b34-4c56-adf0-effec8bb0963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:48.887833 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f-shm.mount: Deactivated successfully. Jul 2 06:53:48.947979 sshd[3410]: pam_unix(sshd:session): session closed for user core Jul 2 06:53:48.948000 audit[3410]: USER_END pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.950980 systemd[1]: sshd@8-10.0.0.35:22-10.0.0.1:53610.service: Deactivated successfully. Jul 2 06:53:48.951841 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 06:53:48.952449 systemd-logind[1264]: Session 9 logged out. Waiting for processes to exit. Jul 2 06:53:48.953316 systemd-logind[1264]: Removed session 9. 
Jul 2 06:53:48.948000 audit[3410]: CRED_DISP pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.959441 kernel: audit: type=1106 audit(1719903228.948:527): pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.959509 kernel: audit: type=1104 audit(1719903228.948:528): pid=3410 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:48.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.35:22-10.0.0.1:53610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:49.439653 kubelet[2344]: I0702 06:53:49.439598 2344 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:53:49.440201 containerd[1272]: time="2024-07-02T06:53:49.440155920Z" level=info msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" Jul 2 06:53:49.440396 containerd[1272]: time="2024-07-02T06:53:49.440359832Z" level=info msg="Ensure that sandbox 67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f in task-service has been cleanup successfully" Jul 2 06:53:49.467295 containerd[1272]: time="2024-07-02T06:53:49.467193679Z" level=error msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" failed" error="failed to destroy network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 06:53:49.467585 kubelet[2344]: E0702 06:53:49.467528 2344 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:53:49.467680 kubelet[2344]: E0702 06:53:49.467602 2344 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f"} Jul 2 06:53:49.467680 kubelet[2344]: E0702 06:53:49.467652 2344 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f47c1652-2b34-4c56-adf0-effec8bb0963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jul 2 06:53:49.467818 kubelet[2344]: E0702 06:53:49.467692 2344 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f47c1652-2b34-4c56-adf0-effec8bb0963\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bdm7p" podUID="f47c1652-2b34-4c56-adf0-effec8bb0963" Jul 2 06:53:53.976342 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:53:53.980874 kernel: audit: type=1130 audit(1719903233.955:530): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:34330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:53.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:34330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:53.956866 systemd[1]: Started sshd@9-10.0.0.35:22-10.0.0.1:34330.service - OpenSSH per-connection server daemon (10.0.0.1:34330). Jul 2 06:53:54.073000 audit[3487]: USER_ACCT pid=3487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.075074 sshd[3487]: Accepted publickey for core from 10.0.0.1 port 34330 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:53:54.076750 sshd[3487]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:53:54.074000 audit[3487]: CRED_ACQ pid=3487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.082258 systemd-logind[1264]: New session 10 of user core. 
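The recurring dns.go "Nameserver limits exceeded" entries are kubelet trimming the node's resolver list down to the three nameservers a pod resolv.conf may carry; the survivors here are 1.1.1.1, 1.0.0.1 and 8.8.8.8. A toy version of that truncation (the fourth entry below is an assumption, since the log only shows the three that were kept):

    # kubelet keeps at most three nameservers when building a pod's resolv.conf.
    MAX_NAMESERVERS = 3
    configured = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]  # 9.9.9.9 is hypothetical
    applied = configured[:MAX_NAMESERVERS]
    print("applied nameserver line:", " ".join(applied))       # matches the dns.go entries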
Jul 2 06:53:54.161106 kernel: audit: type=1101 audit(1719903234.073:531): pid=3487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.161146 kernel: audit: type=1103 audit(1719903234.074:532): pid=3487 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.161171 kernel: audit: type=1006 audit(1719903234.074:533): pid=3487 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 2 06:53:54.161188 kernel: audit: type=1300 audit(1719903234.074:533): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb55539b0 a2=3 a3=7f4088ace480 items=0 ppid=1 pid=3487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:54.161210 kernel: audit: type=1327 audit(1719903234.074:533): proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:54.074000 audit[3487]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeb55539b0 a2=3 a3=7f4088ace480 items=0 ppid=1 pid=3487 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:54.074000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:54.161036 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 06:53:54.164000 audit[3487]: USER_START pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.165000 audit[3489]: CRED_ACQ pid=3489 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.248056 kernel: audit: type=1105 audit(1719903234.164:534): pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.248151 kernel: audit: type=1103 audit(1719903234.165:535): pid=3489 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.472462 sshd[3487]: pam_unix(sshd:session): session closed for user core Jul 2 06:53:54.471000 audit[3487]: USER_END pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.475909 systemd[1]: sshd@9-10.0.0.35:22-10.0.0.1:34330.service: Deactivated successfully. 
Jul 2 06:53:54.476769 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 06:53:54.477436 systemd-logind[1264]: Session 10 logged out. Waiting for processes to exit. Jul 2 06:53:54.478408 systemd-logind[1264]: Removed session 10. Jul 2 06:53:54.558863 kernel: audit: type=1106 audit(1719903234.471:536): pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.558984 kernel: audit: type=1104 audit(1719903234.472:537): pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.472000 audit[3487]: CRED_DISP pid=3487 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:54.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.35:22-10.0.0.1:34330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:54.930113 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount971700130.mount: Deactivated successfully. Jul 2 06:53:55.607146 kubelet[2344]: I0702 06:53:55.607065 2344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:53:55.607933 kubelet[2344]: E0702 06:53:55.607870 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:56.452805 kubelet[2344]: E0702 06:53:56.452751 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:56.646397 containerd[1272]: time="2024-07-02T06:53:56.646289764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:56.653771 containerd[1272]: time="2024-07-02T06:53:56.653633850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 06:53:56.657326 containerd[1272]: time="2024-07-02T06:53:56.657234506Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:56.661271 containerd[1272]: time="2024-07-02T06:53:56.661146940Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:56.666995 containerd[1272]: time="2024-07-02T06:53:56.666901141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:53:56.667566 containerd[1272]: time="2024-07-02T06:53:56.667489548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag 
\"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 9.232899975s" Jul 2 06:53:56.667566 containerd[1272]: time="2024-07-02T06:53:56.667553300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 06:53:56.677000 audit[3507]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:56.677000 audit[3507]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff4b4c3e80 a2=0 a3=7fff4b4c3e6c items=0 ppid=2535 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:56.677000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:56.680471 containerd[1272]: time="2024-07-02T06:53:56.679361527Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 06:53:56.678000 audit[3507]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3507 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:53:56.678000 audit[3507]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff4b4c3e80 a2=0 a3=7fff4b4c3e6c items=0 ppid=2535 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:56.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:53:56.724677 containerd[1272]: time="2024-07-02T06:53:56.724420267Z" level=info msg="CreateContainer within sandbox \"42cea0f64879379ac366d05d90e38e140a6c799e73796e277e29d4ec2a653090\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322\"" Jul 2 06:53:56.725462 containerd[1272]: time="2024-07-02T06:53:56.725394774Z" level=info msg="StartContainer for \"1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322\"" Jul 2 06:53:56.851117 systemd[1]: Started cri-containerd-1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322.scope - libcontainer container 1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322. 
Jul 2 06:53:56.869000 audit: BPF prog-id=139 op=LOAD Jul 2 06:53:56.869000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=2996 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:56.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313263326334366536316238316531333565643163663963663936 Jul 2 06:53:56.869000 audit: BPF prog-id=140 op=LOAD Jul 2 06:53:56.869000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=2996 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:56.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313263326334366536316238316531333565643163663963663936 Jul 2 06:53:56.869000 audit: BPF prog-id=140 op=UNLOAD Jul 2 06:53:56.869000 audit: BPF prog-id=139 op=UNLOAD Jul 2 06:53:56.869000 audit: BPF prog-id=141 op=LOAD Jul 2 06:53:56.869000 audit[3517]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=2996 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:56.869000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133313263326334366536316238316531333565643163663963663936 Jul 2 06:53:56.893408 containerd[1272]: time="2024-07-02T06:53:56.893130638Z" level=info msg="StartContainer for \"1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322\" returns successfully" Jul 2 06:53:56.997919 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 06:53:56.998177 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 06:53:57.456826 kubelet[2344]: E0702 06:53:57.456762 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:53:59.482333 systemd[1]: Started sshd@10-10.0.0.35:22-10.0.0.1:34344.service - OpenSSH per-connection server daemon (10.0.0.1:34344). Jul 2 06:53:59.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:34344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:53:59.495925 kernel: kauditd_printk_skb: 18 callbacks suppressed Jul 2 06:53:59.496107 kernel: audit: type=1130 audit(1719903239.481:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:34344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:53:59.534000 audit[3565]: USER_ACCT pid=3565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.536063 sshd[3565]: Accepted publickey for core from 10.0.0.1 port 34344 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:53:59.584507 sshd[3565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:53:59.535000 audit[3565]: CRED_ACQ pid=3565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.588769 systemd-logind[1264]: New session 11 of user core. Jul 2 06:53:59.591997 kernel: audit: type=1101 audit(1719903239.534:547): pid=3565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.592075 kernel: audit: type=1103 audit(1719903239.535:548): pid=3565 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.592097 kernel: audit: type=1006 audit(1719903239.535:549): pid=3565 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 2 06:53:59.594236 kernel: audit: type=1300 audit(1719903239.535:549): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddb061210 a2=3 a3=7f7584e8d480 items=0 ppid=1 pid=3565 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:59.535000 audit[3565]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddb061210 a2=3 a3=7f7584e8d480 items=0 ppid=1 pid=3565 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:53:59.535000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:59.633381 kernel: audit: type=1327 audit(1719903239.535:549): proctitle=737368643A20636F7265205B707269765D Jul 2 06:53:59.643285 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 2 06:53:59.647000 audit[3565]: USER_START pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.649000 audit[3567]: CRED_ACQ pid=3567 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.728146 kernel: audit: type=1105 audit(1719903239.647:550): pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.728330 kernel: audit: type=1103 audit(1719903239.649:551): pid=3567 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.813411 sshd[3565]: pam_unix(sshd:session): session closed for user core Jul 2 06:53:59.813000 audit[3565]: USER_END pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.816731 systemd[1]: sshd@10-10.0.0.35:22-10.0.0.1:34344.service: Deactivated successfully. Jul 2 06:53:59.817612 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 06:53:59.818827 systemd-logind[1264]: Session 11 logged out. Waiting for processes to exit. Jul 2 06:53:59.819601 systemd-logind[1264]: Removed session 11. Jul 2 06:53:59.813000 audit[3565]: CRED_DISP pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.864980 kernel: audit: type=1106 audit(1719903239.813:552): pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.865065 kernel: audit: type=1104 audit(1719903239.813:553): pid=3565 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:53:59.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.35:22-10.0.0.1:34344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:01.104000 audit[3621]: AVC avc: denied { write } for pid=3621 comm="tee" name="fd" dev="proc" ino=26718 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.104000 audit[3621]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffed8285a2e a2=241 a3=1b6 items=1 ppid=3598 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.104000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 2 06:54:01.104000 audit: PATH item=0 name="/dev/fd/63" inode=25091 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.105000 audit[3627]: AVC avc: denied { write } for pid=3627 comm="tee" name="fd" dev="proc" ino=25097 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.105000 audit[3627]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffefa77ea2e a2=241 a3=1b6 items=1 ppid=3603 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.105000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 2 06:54:01.105000 audit: PATH item=0 name="/dev/fd/63" inode=25094 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.105000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.108000 audit[3635]: AVC avc: denied { write } for pid=3635 comm="tee" name="fd" dev="proc" ino=25931 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.108000 audit[3635]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd9d1b2a2f a2=241 a3=1b6 items=1 ppid=3602 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.108000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 2 06:54:01.108000 audit: PATH item=0 name="/dev/fd/63" inode=24534 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.110000 audit[3640]: AVC avc: denied { write } for pid=3640 comm="tee" name="fd" dev="proc" ino=25105 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.110000 audit[3640]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffff63c4a2e a2=241 a3=1b6 items=1 ppid=3604 pid=3640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.110000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 2 06:54:01.110000 audit: PATH item=0 name="/dev/fd/63" inode=24535 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.111000 audit[3642]: AVC avc: denied { write } for pid=3642 comm="tee" name="fd" dev="proc" ino=26726 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.111000 audit[3642]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe5e9b4a1e a2=241 a3=1b6 items=1 ppid=3606 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.111000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 2 06:54:01.111000 audit: PATH item=0 name="/dev/fd/63" inode=25101 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.111000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.116000 audit[3646]: AVC avc: denied { write } for pid=3646 comm="tee" name="fd" dev="proc" ino=25937 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.116000 audit[3646]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcc4ef0a30 a2=241 a3=1b6 items=1 ppid=3597 pid=3646 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.116000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 2 06:54:01.116000 audit: PATH item=0 name="/dev/fd/63" inode=25102 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 06:54:01.116000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.118000 audit[3629]: AVC avc: denied { write } for pid=3629 comm="tee" name="fd" dev="proc" ino=26738 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 2 06:54:01.118000 audit[3629]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe6ad01a1f a2=241 a3=1b6 items=1 ppid=3601 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.118000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 2 06:54:01.118000 audit: PATH item=0 name="/dev/fd/63" inode=26715 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 06:54:01.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 2 06:54:01.219393 containerd[1272]: time="2024-07-02T06:54:01.219340467Z" level=info msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" Jul 2 06:54:01.220310 containerd[1272]: time="2024-07-02T06:54:01.220278449Z" level=info msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" Jul 2 06:54:01.220532 containerd[1272]: time="2024-07-02T06:54:01.220507326Z" level=info msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" Jul 2 06:54:01.433507 systemd-networkd[1104]: vxlan.calico: Link UP Jul 2 06:54:01.433517 systemd-networkd[1104]: vxlan.calico: Gained carrier Jul 2 06:54:01.455000 audit: BPF prog-id=142 op=LOAD Jul 2 06:54:01.455000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefb9dcfa0 a2=70 a3=7f329d780000 items=0 ppid=3652 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:54:01.455000 audit: BPF prog-id=142 op=UNLOAD Jul 2 06:54:01.455000 audit: BPF prog-id=143 op=LOAD Jul 2 06:54:01.455000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffefb9dcfa0 a2=70 a3=6f items=0 ppid=3652 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:54:01.455000 audit: BPF prog-id=143 op=UNLOAD Jul 2 06:54:01.455000 audit: BPF prog-id=144 op=LOAD Jul 2 06:54:01.455000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=5 a1=7ffefb9dcf30 a2=70 a3=7ffefb9dcfa0 items=0 ppid=3652 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.455000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:54:01.455000 audit: BPF prog-id=144 op=UNLOAD Jul 2 06:54:01.456000 audit: BPF prog-id=145 op=LOAD Jul 2 06:54:01.456000 audit[3817]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffefb9dcf60 a2=70 a3=0 items=0 ppid=3652 pid=3817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.456000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 2 06:54:01.457548 kubelet[2344]: I0702 06:54:01.456164 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-9fw5t" podStartSLOduration=7.196027975 podStartE2EDuration="26.456114986s" podCreationTimestamp="2024-07-02 06:53:35 +0000 UTC" firstStartedPulling="2024-07-02 06:53:37.40803074 +0000 UTC m=+25.279831972" lastFinishedPulling="2024-07-02 06:53:56.668117751 +0000 UTC m=+44.539918983" observedRunningTime="2024-07-02 06:53:57.571009606 +0000 UTC m=+45.442810828" watchObservedRunningTime="2024-07-02 06:54:01.456114986 +0000 UTC m=+49.327916218" Jul 2 06:54:01.470000 audit: BPF prog-id=145 op=UNLOAD Jul 2 06:54:01.540000 audit[3866]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3866 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.540000 audit[3866]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7fff7f0fee00 a2=0 a3=7fff7f0fedec items=0 ppid=3652 pid=3866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.540000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.547000 audit[3863]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=3863 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.547000 audit[3863]: SYSCALL arch=c000003e syscall=46 success=yes exit=6992 a0=3 a1=7fff75e01020 a2=0 a3=7fff75e0100c items=0 ppid=3652 pid=3863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.547000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.550000 audit[3864]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=3864 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.550000 audit[3864]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffeef05d5d0 a2=0 a3=7ffeef05d5bc items=0 ppid=3652 pid=3864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.550000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.551000 audit[3867]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=3867 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.551000 audit[3867]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7ffdc5a7a7a0 a2=0 a3=7ffdc5a7a78c items=0 ppid=3652 pid=3867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.551000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.458 [INFO][3737] k8s.go 608: Cleaning up netns ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.458 [INFO][3737] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" iface="eth0" netns="/var/run/netns/cni-7e0594ef-10a5-318a-eb1f-ee4e9b386e77" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.458 [INFO][3737] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" iface="eth0" netns="/var/run/netns/cni-7e0594ef-10a5-318a-eb1f-ee4e9b386e77" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.458 [INFO][3737] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" iface="eth0" netns="/var/run/netns/cni-7e0594ef-10a5-318a-eb1f-ee4e9b386e77" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.458 [INFO][3737] k8s.go 615: Releasing IP address(es) ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.459 [INFO][3737] utils.go 188: Calico CNI releasing IP address ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.615 [INFO][3818] ipam_plugin.go 411: Releasing address using handleID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.615 [INFO][3818] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.615 [INFO][3818] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.622 [WARNING][3818] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.622 [INFO][3818] ipam_plugin.go 439: Releasing address using workloadID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.623 [INFO][3818] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:01.627083 containerd[1272]: 2024-07-02 06:54:01.624 [INFO][3737] k8s.go 621: Teardown processing complete. 
ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:01.627732 containerd[1272]: time="2024-07-02T06:54:01.627280663Z" level=info msg="TearDown network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" successfully" Jul 2 06:54:01.627732 containerd[1272]: time="2024-07-02T06:54:01.627324518Z" level=info msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" returns successfully" Jul 2 06:54:01.627831 kubelet[2344]: E0702 06:54:01.627650 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:01.629336 containerd[1272]: time="2024-07-02T06:54:01.629286205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6lht,Uid:b7affa88-3cb8-490c-b055-74e0023a3b4f,Namespace:kube-system,Attempt:1,}" Jul 2 06:54:01.629750 systemd[1]: run-netns-cni\x2d7e0594ef\x2d10a5\x2d318a\x2deb1f\x2dee4e9b386e77.mount: Deactivated successfully. Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.455 [INFO][3735] k8s.go 608: Cleaning up netns ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.455 [INFO][3735] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" iface="eth0" netns="/var/run/netns/cni-5df24ba8-031c-b090-770d-ee4648442118" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.455 [INFO][3735] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" iface="eth0" netns="/var/run/netns/cni-5df24ba8-031c-b090-770d-ee4648442118" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.459 [INFO][3735] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" iface="eth0" netns="/var/run/netns/cni-5df24ba8-031c-b090-770d-ee4648442118" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.459 [INFO][3735] k8s.go 615: Releasing IP address(es) ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.459 [INFO][3735] utils.go 188: Calico CNI releasing IP address ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.617 [INFO][3819] ipam_plugin.go 411: Releasing address using handleID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.617 [INFO][3819] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.623 [INFO][3819] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.630 [WARNING][3819] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.630 [INFO][3819] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.632 [INFO][3819] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:01.636140 containerd[1272]: 2024-07-02 06:54:01.634 [INFO][3735] k8s.go 621: Teardown processing complete. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:01.636750 containerd[1272]: time="2024-07-02T06:54:01.636305124Z" level=info msg="TearDown network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" successfully" Jul 2 06:54:01.636750 containerd[1272]: time="2024-07-02T06:54:01.636337315Z" level=info msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" returns successfully" Jul 2 06:54:01.637430 containerd[1272]: time="2024-07-02T06:54:01.637397281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdm7p,Uid:f47c1652-2b34-4c56-adf0-effec8bb0963,Namespace:calico-system,Attempt:1,}" Jul 2 06:54:01.640475 systemd[1]: run-netns-cni\x2d5df24ba8\x2d031c\x2db090\x2d770d\x2dee4648442118.mount: Deactivated successfully. Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] k8s.go 608: Cleaning up netns ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" iface="eth0" netns="/var/run/netns/cni-b14d36eb-1065-40b6-df50-c00d594312c2" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" iface="eth0" netns="/var/run/netns/cni-b14d36eb-1065-40b6-df50-c00d594312c2" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" iface="eth0" netns="/var/run/netns/cni-b14d36eb-1065-40b6-df50-c00d594312c2" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] k8s.go 615: Releasing IP address(es) ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.514 [INFO][3734] utils.go 188: Calico CNI releasing IP address ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.619 [INFO][3847] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.619 [INFO][3847] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.632 [INFO][3847] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.641 [WARNING][3847] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.641 [INFO][3847] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.643 [INFO][3847] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:01.647172 containerd[1272]: 2024-07-02 06:54:01.645 [INFO][3734] k8s.go 621: Teardown processing complete. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:01.647570 containerd[1272]: time="2024-07-02T06:54:01.647327362Z" level=info msg="TearDown network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" successfully" Jul 2 06:54:01.647570 containerd[1272]: time="2024-07-02T06:54:01.647356958Z" level=info msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" returns successfully" Jul 2 06:54:01.647835 kubelet[2344]: E0702 06:54:01.647812 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:01.648549 containerd[1272]: time="2024-07-02T06:54:01.648520130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-st82c,Uid:d461e7f3-5eb3-4f3a-bd28-01c916db29c2,Namespace:kube-system,Attempt:1,}" Jul 2 06:54:01.649479 systemd[1]: run-netns-cni\x2db14d36eb\x2d1065\x2d40b6\x2ddf50\x2dc00d594312c2.mount: Deactivated successfully. 
Jul 2 06:54:01.896517 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali899dc11f37a: link becomes ready Jul 2 06:54:01.897414 systemd-networkd[1104]: cali899dc11f37a: Link UP Jul 2 06:54:01.897611 systemd-networkd[1104]: cali899dc11f37a: Gained carrier Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.798 [INFO][3879] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--bdm7p-eth0 csi-node-driver- calico-system f47c1652-2b34-4c56-adf0-effec8bb0963 863 0 2024-07-02 06:53:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s localhost csi-node-driver-bdm7p eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali899dc11f37a [] []}} ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.799 [INFO][3879] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.847 [INFO][3920] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" HandleID="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.859 [INFO][3920] ipam_plugin.go 264: Auto assigning IP ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" HandleID="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000631da0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-bdm7p", "timestamp":"2024-07-02 06:54:01.847395705 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.859 [INFO][3920] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.859 [INFO][3920] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.859 [INFO][3920] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.861 [INFO][3920] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.868 [INFO][3920] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.873 [INFO][3920] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.875 [INFO][3920] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.877 [INFO][3920] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.877 [INFO][3920] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.879 [INFO][3920] ipam.go 1685: Creating new handle: k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62 Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.883 [INFO][3920] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.887 [INFO][3920] ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.888 [INFO][3920] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" host="localhost" Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.888 [INFO][3920] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:54:01.910567 containerd[1272]: 2024-07-02 06:54:01.888 [INFO][3920] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" HandleID="k8s-pod-network.82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.892 [INFO][3879] k8s.go 386: Populated endpoint ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdm7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f47c1652-2b34-4c56-adf0-effec8bb0963", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-bdm7p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali899dc11f37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.892 [INFO][3879] k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.892 [INFO][3879] dataplane_linux.go 68: Setting the host side veth name to cali899dc11f37a ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.895 [INFO][3879] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.896 [INFO][3879] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdm7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f47c1652-2b34-4c56-adf0-effec8bb0963", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62", Pod:"csi-node-driver-bdm7p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali899dc11f37a", MAC:"fa:a5:f2:45:9d:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:01.911523 containerd[1272]: 2024-07-02 06:54:01.908 [INFO][3879] k8s.go 500: Wrote updated endpoint to datastore ContainerID="82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62" Namespace="calico-system" Pod="csi-node-driver-bdm7p" WorkloadEndpoint="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:01.935000 audit[3956]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3956 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.937031 systemd-networkd[1104]: cali4370af552be: Link UP Jul 2 06:54:01.938737 systemd-networkd[1104]: cali4370af552be: Gained carrier Jul 2 06:54:01.938832 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4370af552be: link becomes ready Jul 2 06:54:01.935000 audit[3956]: SYSCALL arch=c000003e syscall=46 success=yes exit=19148 a0=3 a1=7fffa3978e00 a2=0 a3=7fffa3978dec items=0 ppid=3652 pid=3956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.935000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.817 [INFO][3901] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--z6lht-eth0 coredns-76f75df574- kube-system b7affa88-3cb8-490c-b055-74e0023a3b4f 862 0 2024-07-02 06:53:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-z6lht eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4370af552be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" 
WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.818 [INFO][3901] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.867 [INFO][3925] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" HandleID="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.879 [INFO][3925] ipam_plugin.go 264: Auto assigning IP ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" HandleID="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002a1de0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-z6lht", "timestamp":"2024-07-02 06:54:01.86771274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.879 [INFO][3925] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.888 [INFO][3925] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.888 [INFO][3925] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.893 [INFO][3925] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.902 [INFO][3925] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.912 [INFO][3925] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.914 [INFO][3925] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.916 [INFO][3925] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.916 [INFO][3925] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.919 [INFO][3925] ipam.go 1685: Creating new handle: k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986 Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.924 [INFO][3925] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3925] ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3925] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" host="localhost" Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3925] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:54:01.951287 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3925] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" HandleID="k8s-pod-network.63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.932 [INFO][3901] k8s.go 386: Populated endpoint ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z6lht-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7affa88-3cb8-490c-b055-74e0023a3b4f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-z6lht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4370af552be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.932 [INFO][3901] k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.932 [INFO][3901] dataplane_linux.go 68: Setting the host side veth name to cali4370af552be ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.939 [INFO][3901] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.939 [INFO][3901] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z6lht-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7affa88-3cb8-490c-b055-74e0023a3b4f", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986", Pod:"coredns-76f75df574-z6lht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4370af552be", MAC:"06:28:5c:70:5c:75", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:01.952114 containerd[1272]: 2024-07-02 06:54:01.948 [INFO][3901] k8s.go 500: Wrote updated endpoint to datastore ContainerID="63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986" Namespace="kube-system" Pod="coredns-76f75df574-z6lht" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:01.979005 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3218e1c2236: link becomes ready Jul 2 06:54:01.978368 systemd-networkd[1104]: cali3218e1c2236: Link UP Jul 2 06:54:01.978553 systemd-networkd[1104]: cali3218e1c2236: Gained carrier Jul 2 06:54:01.978000 audit[3984]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=3984 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:01.978000 audit[3984]: SYSCALL arch=c000003e syscall=46 success=yes exit=20336 a0=3 a1=7ffc15c675a0 a2=0 a3=7ffc15c6758c items=0 ppid=3652 pid=3984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:01.978000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:01.989335 containerd[1272]: time="2024-07-02T06:54:01.989125940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:54:01.989335 containerd[1272]: time="2024-07-02T06:54:01.989205422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:01.989335 containerd[1272]: time="2024-07-02T06:54:01.989237282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:54:01.989335 containerd[1272]: time="2024-07-02T06:54:01.989258072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.826 [INFO][3888] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--st82c-eth0 coredns-76f75df574- kube-system d461e7f3-5eb3-4f3a-bd28-01c916db29c2 864 0 2024-07-02 06:53:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-st82c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3218e1c2236 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.826 [INFO][3888] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.879 [INFO][3932] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" HandleID="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.898 [INFO][3932] ipam_plugin.go 264: Auto assigning IP ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" HandleID="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027d8d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-st82c", "timestamp":"2024-07-02 06:54:01.879687056 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.898 [INFO][3932] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3932] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.929 [INFO][3932] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.932 [INFO][3932] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.939 [INFO][3932] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.949 [INFO][3932] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.952 [INFO][3932] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.955 [INFO][3932] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.955 [INFO][3932] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.958 [INFO][3932] ipam.go 1685: Creating new handle: k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637 Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.962 [INFO][3932] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.968 [INFO][3932] ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.968 [INFO][3932] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" host="localhost" Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.968 [INFO][3932] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 06:54:02.003909 containerd[1272]: 2024-07-02 06:54:01.968 [INFO][3932] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" HandleID="k8s-pod-network.bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.971 [INFO][3888] k8s.go 386: Populated endpoint ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--st82c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d461e7f3-5eb3-4f3a-bd28-01c916db29c2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-st82c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218e1c2236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.971 [INFO][3888] k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.971 [INFO][3888] dataplane_linux.go 68: Setting the host side veth name to cali3218e1c2236 ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.981 [INFO][3888] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.987 [INFO][3888] k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--st82c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d461e7f3-5eb3-4f3a-bd28-01c916db29c2", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637", Pod:"coredns-76f75df574-st82c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218e1c2236", MAC:"fa:c9:55:bd:c1:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:02.004575 containerd[1272]: 2024-07-02 06:54:01.996 [INFO][3888] k8s.go 500: Wrote updated endpoint to datastore ContainerID="bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637" Namespace="kube-system" Pod="coredns-76f75df574-st82c" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:02.011710 containerd[1272]: time="2024-07-02T06:54:02.011343946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:54:02.011710 containerd[1272]: time="2024-07-02T06:54:02.011489774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:02.011710 containerd[1272]: time="2024-07-02T06:54:02.011529060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:54:02.012179 containerd[1272]: time="2024-07-02T06:54:02.012133764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:02.015995 systemd[1]: Started cri-containerd-82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62.scope - libcontainer container 82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62. 
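The WorkloadEndpoint dumps above print the pod ports in hex (Port:0x35 for dns and dns-tcp, Port:0x23c1 for metrics). A trivial check, plain Python arithmetic only, confirming these are the usual CoreDNS ports 53 and 9153:

# Port values copied from the endpoint dumps above, printed in decimal.
for name, port in [("dns", 0x35), ("dns-tcp", 0x35), ("metrics", 0x23c1)]:
    print(f"{name}: {port}")   # dns: 53, dns-tcp: 53, metrics: 9153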
Jul 2 06:54:02.020000 audit[4036]: NETFILTER_CFG table=filter:103 family=2 entries=34 op=nft_register_chain pid=4036 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:02.020000 audit[4036]: SYSCALL arch=c000003e syscall=46 success=yes exit=18220 a0=3 a1=7ffcd9b52cb0 a2=0 a3=7ffcd9b52c9c items=0 ppid=3652 pid=4036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.020000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:02.035057 systemd[1]: Started cri-containerd-63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986.scope - libcontainer container 63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986. Jul 2 06:54:02.037000 audit: BPF prog-id=146 op=LOAD Jul 2 06:54:02.037000 audit: BPF prog-id=147 op=LOAD Jul 2 06:54:02.037000 audit[4003]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3972 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.037000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832646661643031356134346138383436366632373831633861633834 Jul 2 06:54:02.038000 audit: BPF prog-id=148 op=LOAD Jul 2 06:54:02.038000 audit[4003]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3972 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832646661643031356134346138383436366632373831633861633834 Jul 2 06:54:02.038000 audit: BPF prog-id=148 op=UNLOAD Jul 2 06:54:02.038000 audit: BPF prog-id=147 op=UNLOAD Jul 2 06:54:02.038000 audit: BPF prog-id=149 op=LOAD Jul 2 06:54:02.038000 audit[4003]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3972 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.038000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3832646661643031356134346138383436366632373831633861633834 Jul 2 06:54:02.041254 systemd-resolved[1215]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:54:02.048865 containerd[1272]: time="2024-07-02T06:54:02.047717884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:54:02.048865 containerd[1272]: time="2024-07-02T06:54:02.047846960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:02.048865 containerd[1272]: time="2024-07-02T06:54:02.047884691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:54:02.048865 containerd[1272]: time="2024-07-02T06:54:02.047911673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:02.056000 audit: BPF prog-id=150 op=LOAD Jul 2 06:54:02.057000 audit: BPF prog-id=151 op=LOAD Jul 2 06:54:02.057000 audit[4035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131988 a2=78 a3=0 items=0 ppid=4002 pid=4035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633666162636465616633636233653464383264653035393763666333 Jul 2 06:54:02.057000 audit: BPF prog-id=152 op=LOAD Jul 2 06:54:02.057000 audit[4035]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000131720 a2=78 a3=0 items=0 ppid=4002 pid=4035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.057000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633666162636465616633636233653464383264653035393763666333 Jul 2 06:54:02.058000 audit: BPF prog-id=152 op=UNLOAD Jul 2 06:54:02.058000 audit: BPF prog-id=151 op=UNLOAD Jul 2 06:54:02.058000 audit: BPF prog-id=153 op=LOAD Jul 2 06:54:02.058000 audit[4035]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000131be0 a2=78 a3=0 items=0 ppid=4002 pid=4035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.058000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633666162636465616633636233653464383264653035393763666333 Jul 2 06:54:02.061321 systemd-resolved[1215]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:54:02.084028 systemd[1]: Started cri-containerd-bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637.scope - libcontainer container bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637. 
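The audit PROCTITLE records in this section carry the audited process's command line as a hex-encoded, NUL-separated argv. A small decoding sketch, standard library only; the hex string is the iptables-nft-restore proctitle that appears in the NETFILTER_CFG records a few entries above, and the comment shows what it expands to:

# proctitle value copied from the NETFILTER_CFG audit records above.
hex_proctitle = (
    "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
    "002D2D766572626F7365002D2D77616974003130002D2D776169742D696E"
    "74657276616C003530303030"
)
argv = bytes.fromhex(hex_proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in argv))
# -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000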
Jul 2 06:54:02.085467 containerd[1272]: time="2024-07-02T06:54:02.085420799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bdm7p,Uid:f47c1652-2b34-4c56-adf0-effec8bb0963,Namespace:calico-system,Attempt:1,} returns sandbox id \"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62\"" Jul 2 06:54:02.086985 containerd[1272]: time="2024-07-02T06:54:02.086955258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 06:54:02.099000 audit: BPF prog-id=154 op=LOAD Jul 2 06:54:02.099000 audit: BPF prog-id=155 op=LOAD Jul 2 06:54:02.099000 audit[4077]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4063 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653566656537663366363162666263323536333937313663326462 Jul 2 06:54:02.099000 audit: BPF prog-id=156 op=LOAD Jul 2 06:54:02.099000 audit[4077]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4063 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653566656537663366363162666263323536333937313663326462 Jul 2 06:54:02.099000 audit: BPF prog-id=156 op=UNLOAD Jul 2 06:54:02.099000 audit: BPF prog-id=155 op=UNLOAD Jul 2 06:54:02.099000 audit: BPF prog-id=157 op=LOAD Jul 2 06:54:02.099000 audit[4077]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4063 pid=4077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.099000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262653566656537663366363162666263323536333937313663326462 Jul 2 06:54:02.101100 systemd-resolved[1215]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:54:02.105248 containerd[1272]: time="2024-07-02T06:54:02.104693070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-z6lht,Uid:b7affa88-3cb8-490c-b055-74e0023a3b4f,Namespace:kube-system,Attempt:1,} returns sandbox id \"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986\"" Jul 2 06:54:02.105531 kubelet[2344]: E0702 06:54:02.105477 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:02.111859 containerd[1272]: time="2024-07-02T06:54:02.108446296Z" level=info msg="CreateContainer within sandbox 
\"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:54:02.130364 containerd[1272]: time="2024-07-02T06:54:02.130309926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-st82c,Uid:d461e7f3-5eb3-4f3a-bd28-01c916db29c2,Namespace:kube-system,Attempt:1,} returns sandbox id \"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637\"" Jul 2 06:54:02.131415 kubelet[2344]: E0702 06:54:02.131395 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:02.141303 containerd[1272]: time="2024-07-02T06:54:02.141249264Z" level=info msg="CreateContainer within sandbox \"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 06:54:02.141728 containerd[1272]: time="2024-07-02T06:54:02.141680248Z" level=info msg="CreateContainer within sandbox \"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2e95ecde48988b9096486f888f0b076a2fed6b60c4d8e3f83c418529ffe5feb\"" Jul 2 06:54:02.142228 containerd[1272]: time="2024-07-02T06:54:02.142203817Z" level=info msg="StartContainer for \"d2e95ecde48988b9096486f888f0b076a2fed6b60c4d8e3f83c418529ffe5feb\"" Jul 2 06:54:02.160718 containerd[1272]: time="2024-07-02T06:54:02.160530783Z" level=info msg="CreateContainer within sandbox \"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2837bedf1f98f46dc681503baabf35f30ddd664b04029f3cc38e6eb200e0998d\"" Jul 2 06:54:02.161699 containerd[1272]: time="2024-07-02T06:54:02.161678806Z" level=info msg="StartContainer for \"2837bedf1f98f46dc681503baabf35f30ddd664b04029f3cc38e6eb200e0998d\"" Jul 2 06:54:02.172955 systemd[1]: Started cri-containerd-d2e95ecde48988b9096486f888f0b076a2fed6b60c4d8e3f83c418529ffe5feb.scope - libcontainer container d2e95ecde48988b9096486f888f0b076a2fed6b60c4d8e3f83c418529ffe5feb. 
Jul 2 06:54:02.188000 audit: BPF prog-id=158 op=LOAD Jul 2 06:54:02.188000 audit: BPF prog-id=159 op=LOAD Jul 2 06:54:02.188000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4002 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653935656364653438393838623930393634383666383838663062 Jul 2 06:54:02.188000 audit: BPF prog-id=160 op=LOAD Jul 2 06:54:02.188000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4002 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653935656364653438393838623930393634383666383838663062 Jul 2 06:54:02.188000 audit: BPF prog-id=160 op=UNLOAD Jul 2 06:54:02.188000 audit: BPF prog-id=159 op=UNLOAD Jul 2 06:54:02.188000 audit: BPF prog-id=161 op=LOAD Jul 2 06:54:02.188000 audit[4125]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4002 pid=4125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.188000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6432653935656364653438393838623930393634383666383838663062 Jul 2 06:54:02.195903 systemd[1]: Started cri-containerd-2837bedf1f98f46dc681503baabf35f30ddd664b04029f3cc38e6eb200e0998d.scope - libcontainer container 2837bedf1f98f46dc681503baabf35f30ddd664b04029f3cc38e6eb200e0998d. 
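The SYSCALL audit records in this section identify calls only by architecture and number: arch=c000003e is AUDIT_ARCH_X86_64, syscall=321 is bpf (paired with the BPF prog-id LOAD events emitted while runc sets up the containers) and syscall=46 is sendmsg (the netlink write behind the NETFILTER_CFG events). A tiny lookup sketch covering just the numbers that appear here; a real decoder would use a full syscall table:

# Minimal x86_64 syscall-number lookup for the audit records above.
X86_64_SYSCALLS = {
    46: "sendmsg",   # netlink message carrying the nftables ruleset updates
    321: "bpf",      # BPF program loads issued by runc during container setup
}

for arch, nr in [("c000003e", 321), ("c000003e", 46)]:
    name = X86_64_SYSCALLS.get(nr, "unknown") if arch == "c000003e" else "unknown"
    print(f"arch={arch} syscall={nr} -> {name}")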
Jul 2 06:54:02.209000 audit: BPF prog-id=162 op=LOAD Jul 2 06:54:02.210000 audit: BPF prog-id=163 op=LOAD Jul 2 06:54:02.210000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4063 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333762656466316639386634366463363831353033626161626633 Jul 2 06:54:02.210000 audit: BPF prog-id=164 op=LOAD Jul 2 06:54:02.210000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4063 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.210000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333762656466316639386634366463363831353033626161626633 Jul 2 06:54:02.211000 audit: BPF prog-id=164 op=UNLOAD Jul 2 06:54:02.211000 audit: BPF prog-id=163 op=UNLOAD Jul 2 06:54:02.212801 containerd[1272]: time="2024-07-02T06:54:02.212742583Z" level=info msg="StartContainer for \"d2e95ecde48988b9096486f888f0b076a2fed6b60c4d8e3f83c418529ffe5feb\" returns successfully" Jul 2 06:54:02.211000 audit: BPF prog-id=165 op=LOAD Jul 2 06:54:02.211000 audit[4152]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4063 pid=4152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.211000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3238333762656466316639386634366463363831353033626161626633 Jul 2 06:54:02.239061 containerd[1272]: time="2024-07-02T06:54:02.238991316Z" level=info msg="StartContainer for \"2837bedf1f98f46dc681503baabf35f30ddd664b04029f3cc38e6eb200e0998d\" returns successfully" Jul 2 06:54:02.470058 kubelet[2344]: E0702 06:54:02.469919 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:02.479040 kubelet[2344]: E0702 06:54:02.478998 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:02.495052 kubelet[2344]: I0702 06:54:02.494998 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-z6lht" podStartSLOduration=37.494946662 podStartE2EDuration="37.494946662s" podCreationTimestamp="2024-07-02 06:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:54:02.482081725 +0000 UTC 
m=+50.353882947" watchObservedRunningTime="2024-07-02 06:54:02.494946662 +0000 UTC m=+50.366747884" Jul 2 06:54:02.508000 audit[4202]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:02.508000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=5164 a0=3 a1=7fff03104650 a2=0 a3=7fff0310463c items=0 ppid=2535 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.508000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:02.510000 audit[4202]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4202 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:02.510000 audit[4202]: SYSCALL arch=c000003e syscall=46 success=yes exit=3468 a0=3 a1=7fff03104650 a2=0 a3=0 items=0 ppid=2535 pid=4202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.510000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:02.521000 audit[4204]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:02.521000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe0d519a30 a2=0 a3=7ffe0d519a1c items=0 ppid=2535 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.521000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:02.524000 audit[4204]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4204 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:02.524000 audit[4204]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffe0d519a30 a2=0 a3=7ffe0d519a1c items=0 ppid=2535 pid=4204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:02.524000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:03.219588 containerd[1272]: time="2024-07-02T06:54:03.219522334Z" level=info msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" Jul 2 06:54:03.392556 kubelet[2344]: I0702 06:54:03.392487 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-st82c" podStartSLOduration=38.392435519 podStartE2EDuration="38.392435519s" podCreationTimestamp="2024-07-02 06:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 06:54:02.508950953 +0000 UTC m=+50.380752175" watchObservedRunningTime="2024-07-02 06:54:03.392435519 
+0000 UTC m=+51.264236752" Jul 2 06:54:03.410425 systemd-networkd[1104]: cali4370af552be: Gained IPv6LL Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.393 [INFO][4223] k8s.go 608: Cleaning up netns ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.393 [INFO][4223] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" iface="eth0" netns="/var/run/netns/cni-dfcd759e-77e5-1a3e-a46b-4b1193edc2ab" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.393 [INFO][4223] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" iface="eth0" netns="/var/run/netns/cni-dfcd759e-77e5-1a3e-a46b-4b1193edc2ab" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.394 [INFO][4223] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" iface="eth0" netns="/var/run/netns/cni-dfcd759e-77e5-1a3e-a46b-4b1193edc2ab" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.394 [INFO][4223] k8s.go 615: Releasing IP address(es) ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.394 [INFO][4223] utils.go 188: Calico CNI releasing IP address ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.423 [INFO][4230] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.423 [INFO][4230] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.423 [INFO][4230] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.429 [WARNING][4230] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.429 [INFO][4230] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.432 [INFO][4230] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:03.435193 containerd[1272]: 2024-07-02 06:54:03.433 [INFO][4223] k8s.go 621: Teardown processing complete. 
ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:03.438912 containerd[1272]: time="2024-07-02T06:54:03.438795014Z" level=info msg="TearDown network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" successfully" Jul 2 06:54:03.438912 containerd[1272]: time="2024-07-02T06:54:03.438851272Z" level=info msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" returns successfully" Jul 2 06:54:03.439444 containerd[1272]: time="2024-07-02T06:54:03.439411451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf5c69fc4-6jthk,Uid:d5281e8f-feca-4ab3-9b29-c3d038aed0d0,Namespace:calico-system,Attempt:1,}" Jul 2 06:54:03.440324 systemd[1]: run-netns-cni\x2ddfcd759e\x2d77e5\x2d1a3e\x2da46b\x2d4b1193edc2ab.mount: Deactivated successfully. Jul 2 06:54:03.473931 systemd-networkd[1104]: vxlan.calico: Gained IPv6LL Jul 2 06:54:03.480978 kubelet[2344]: E0702 06:54:03.480941 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:03.481389 kubelet[2344]: E0702 06:54:03.481111 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:03.666041 systemd-networkd[1104]: cali3218e1c2236: Gained IPv6LL Jul 2 06:54:03.731685 systemd-networkd[1104]: cali899dc11f37a: Gained IPv6LL Jul 2 06:54:03.753142 systemd-networkd[1104]: cali2cd9f6fd9ad: Link UP Jul 2 06:54:03.754877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:54:03.754928 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2cd9f6fd9ad: link becomes ready Jul 2 06:54:03.755029 systemd-networkd[1104]: cali2cd9f6fd9ad: Gained carrier Jul 2 06:54:03.789000 audit[4261]: NETFILTER_CFG table=filter:108 family=2 entries=8 op=nft_register_rule pid=4261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:03.789000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffe692bf680 a2=0 a3=7ffe692bf66c items=0 ppid=2535 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.548 [INFO][4237] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0 calico-kube-controllers-7bf5c69fc4- calico-system d5281e8f-feca-4ab3-9b29-c3d038aed0d0 905 0 2024-07-02 06:53:31 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bf5c69fc4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7bf5c69fc4-6jthk eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali2cd9f6fd9ad [] []}} ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.548 [INFO][4237] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.594 [INFO][4250] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" HandleID="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.716 [INFO][4250] ipam_plugin.go 264: Auto assigning IP ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" HandleID="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000281dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7bf5c69fc4-6jthk", "timestamp":"2024-07-02 06:54:03.594603913 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.716 [INFO][4250] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.716 [INFO][4250] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.716 [INFO][4250] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.718 [INFO][4250] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.721 [INFO][4250] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.726 [INFO][4250] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.728 [INFO][4250] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.731 [INFO][4250] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.731 [INFO][4250] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.733 [INFO][4250] ipam.go 1685: Creating new handle: k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38 Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.736 [INFO][4250] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.749 [INFO][4250] ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.749 [INFO][4250] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" host="localhost" Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.749 [INFO][4250] ipam_plugin.go 373: Released host-wide IPAM lock. 
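The pod_startup_latency_tracker entries earlier in this section report podStartSLOduration values of 37.494946662s and 38.392435519s; with no image pull recorded (firstStartedPulling and lastFinishedPulling are the zero time), each value lines up exactly with watchObservedRunningTime minus podCreationTimestamp. Checking the first one with a short calculation, timestamps copied from the log and truncated to microseconds:

from datetime import datetime, timezone

created  = datetime(2024, 7, 2, 6, 53, 25, tzinfo=timezone.utc)          # podCreationTimestamp
observed = datetime(2024, 7, 2, 6, 54, 2, 494946, tzinfo=timezone.utc)   # watchObservedRunningTime (µs precision)

# -> 37.494946, matching podStartSLOduration=37.494946662 to µs precision
print((observed - created).total_seconds())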
Jul 2 06:54:03.810539 containerd[1272]: 2024-07-02 06:54:03.749 [INFO][4250] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" HandleID="k8s-pod-network.fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.751 [INFO][4237] k8s.go 386: Populated endpoint ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0", GenerateName:"calico-kube-controllers-7bf5c69fc4-", Namespace:"calico-system", SelfLink:"", UID:"d5281e8f-feca-4ab3-9b29-c3d038aed0d0", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf5c69fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7bf5c69fc4-6jthk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cd9f6fd9ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.751 [INFO][4237] k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.751 [INFO][4237] dataplane_linux.go 68: Setting the host side veth name to cali2cd9f6fd9ad ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.755 [INFO][4237] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.755 [INFO][4237] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0", GenerateName:"calico-kube-controllers-7bf5c69fc4-", Namespace:"calico-system", SelfLink:"", UID:"d5281e8f-feca-4ab3-9b29-c3d038aed0d0", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf5c69fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38", Pod:"calico-kube-controllers-7bf5c69fc4-6jthk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cd9f6fd9ad", MAC:"ee:9b:58:8c:3d:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:03.811301 containerd[1272]: 2024-07-02 06:54:03.808 [INFO][4237] k8s.go 500: Wrote updated endpoint to datastore ContainerID="fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38" Namespace="calico-system" Pod="calico-kube-controllers-7bf5c69fc4-6jthk" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:03.820000 audit[4274]: NETFILTER_CFG table=filter:109 family=2 entries=42 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:03.820000 audit[4274]: SYSCALL arch=c000003e syscall=46 success=yes exit=21016 a0=3 a1=7ffffae52ad0 a2=0 a3=7ffffae52abc items=0 ppid=3652 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.820000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:03.837000 audit[4261]: NETFILTER_CFG table=nat:110 family=2 entries=56 op=nft_register_chain pid=4261 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:03.837000 audit[4261]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe692bf680 a2=0 a3=7ffe692bf66c items=0 ppid=2535 pid=4261 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.837000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:03.895452 containerd[1272]: time="2024-07-02T06:54:03.895336734Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:54:03.895452 containerd[1272]: time="2024-07-02T06:54:03.895412068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:03.895659 containerd[1272]: time="2024-07-02T06:54:03.895437446Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:54:03.895659 containerd[1272]: time="2024-07-02T06:54:03.895450681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:03.927032 systemd[1]: Started cri-containerd-fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38.scope - libcontainer container fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38. Jul 2 06:54:03.939000 audit: BPF prog-id=166 op=LOAD Jul 2 06:54:03.940000 audit: BPF prog-id=167 op=LOAD Jul 2 06:54:03.940000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=4284 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353962323664663461626639356563663763363533613234633263 Jul 2 06:54:03.940000 audit: BPF prog-id=168 op=LOAD Jul 2 06:54:03.940000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=4284 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353962323664663461626639356563663763363533613234633263 Jul 2 06:54:03.940000 audit: BPF prog-id=168 op=UNLOAD Jul 2 06:54:03.940000 audit: BPF prog-id=167 op=UNLOAD Jul 2 06:54:03.940000 audit: BPF prog-id=169 op=LOAD Jul 2 06:54:03.940000 audit[4294]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=4284 pid=4294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:03.940000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6665353962323664663461626639356563663763363533613234633263 Jul 2 06:54:03.942011 systemd-resolved[1215]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:54:03.967090 containerd[1272]: 
time="2024-07-02T06:54:03.967037517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bf5c69fc4-6jthk,Uid:d5281e8f-feca-4ab3-9b29-c3d038aed0d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38\"" Jul 2 06:54:04.484290 kubelet[2344]: E0702 06:54:04.484255 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:04.484816 kubelet[2344]: E0702 06:54:04.484513 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:04.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.35:22-10.0.0.1:38028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:04.831654 systemd[1]: Started sshd@11-10.0.0.35:22-10.0.0.1:38028.service - OpenSSH per-connection server daemon (10.0.0.1:38028). Jul 2 06:54:04.836611 kernel: kauditd_printk_skb: 166 callbacks suppressed Jul 2 06:54:04.836773 kernel: audit: type=1130 audit(1719903244.831:620): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.35:22-10.0.0.1:38028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:04.863371 containerd[1272]: time="2024-07-02T06:54:04.863299792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:04.865562 containerd[1272]: time="2024-07-02T06:54:04.865505641Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 06:54:04.867134 containerd[1272]: time="2024-07-02T06:54:04.867083842Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:04.869502 containerd[1272]: time="2024-07-02T06:54:04.869444626Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:04.871986 containerd[1272]: time="2024-07-02T06:54:04.871924526Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:04.871000 audit[4322]: USER_ACCT pid=4322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.872668 containerd[1272]: time="2024-07-02T06:54:04.872605677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 2.785517053s" Jul 2 06:54:04.872753 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 38028 ssh2: RSA 
SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:04.873070 containerd[1272]: time="2024-07-02T06:54:04.872668907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 06:54:04.874798 containerd[1272]: time="2024-07-02T06:54:04.874279059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jul 2 06:54:04.875281 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:04.871000 audit[4322]: CRED_ACQ pid=4322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.877220 containerd[1272]: time="2024-07-02T06:54:04.876259938Z" level=info msg="CreateContainer within sandbox \"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 06:54:04.883125 kernel: audit: type=1101 audit(1719903244.871:621): pid=4322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.883242 kernel: audit: type=1103 audit(1719903244.871:622): pid=4322 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.883289 kernel: audit: type=1006 audit(1719903244.871:623): pid=4322 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 2 06:54:04.888867 kernel: audit: type=1300 audit(1719903244.871:623): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5ce775e0 a2=3 a3=7f1d40d8c480 items=0 ppid=1 pid=4322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.889137 kernel: audit: type=1327 audit(1719903244.871:623): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:04.871000 audit[4322]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5ce775e0 a2=3 a3=7f1d40d8c480 items=0 ppid=1 pid=4322 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:04.883860 systemd-logind[1264]: New session 12 of user core. Jul 2 06:54:04.891165 systemd[1]: Started session-12.scope - Session 12 of User core. 
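[Editor's note] The repeated kubelet dns.go:153 errors above ("Nameserver limits exceeded") indicate that the node's /etc/resolv.conf lists more nameservers than the kubelet's limit (three, judging by the applied line), so only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are passed through. A minimal sketch of that check, assuming Python 3 and a conventional resolv.conf layout; the fourth nameserver in the sample is hypothetical, since the omitted entries never appear in the log:

    # Mirror the behaviour implied by the log: keep only the first three
    # "nameserver" entries from resolv.conf and report the rest as omitted.
    MAX_NAMESERVERS = 3  # limit the kubelet applies, per the error above

    def applied_nameservers(resolv_conf_text):
        servers = []
        for line in resolv_conf_text.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    sample = """\
    nameserver 1.1.1.1
    nameserver 1.0.0.1
    nameserver 8.8.8.8
    nameserver 8.8.4.4
    """
    kept, omitted = applied_nameservers(sample)
    print("applied nameserver line:", " ".join(kept))
    print("omitted:", " ".join(omitted) or "(none)")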
Jul 2 06:54:04.902000 audit[4322]: USER_START pid=4322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.906699 containerd[1272]: time="2024-07-02T06:54:04.906618738Z" level=info msg="CreateContainer within sandbox \"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"339de6fcaff9896b365a1e42e13e17cef69dc3de48ab4425bc3bc6e69f7be9f9\"" Jul 2 06:54:04.904000 audit[4324]: CRED_ACQ pid=4324 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.909764 containerd[1272]: time="2024-07-02T06:54:04.909624493Z" level=info msg="StartContainer for \"339de6fcaff9896b365a1e42e13e17cef69dc3de48ab4425bc3bc6e69f7be9f9\"" Jul 2 06:54:04.910922 kernel: audit: type=1105 audit(1719903244.902:624): pid=4322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.910970 kernel: audit: type=1103 audit(1719903244.904:625): pid=4324 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:04.938976 systemd[1]: Started cri-containerd-339de6fcaff9896b365a1e42e13e17cef69dc3de48ab4425bc3bc6e69f7be9f9.scope - libcontainer container 339de6fcaff9896b365a1e42e13e17cef69dc3de48ab4425bc3bc6e69f7be9f9. 
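[Editor's note] As an aside on the Calico records near the top of this excerpt: the IPAM plugin reports the assignment as IPv4=[192.168.88.132/26] (the address together with what appears to be its IPAM block), while the WorkloadEndpoint stores 192.168.88.132/32 as a host route. Both refer to the same address, which a quick Python 3 standard-library check confirms:

    # The /26 from the IPAM record and the /32 from the WorkloadEndpoint
    # describe the same assignment: the host address sits inside the block.
    import ipaddress

    assigned = ipaddress.ip_interface("192.168.88.132/26")   # from ipam_plugin.go 282 above
    endpoint = ipaddress.ip_network("192.168.88.132/32")     # from the WorkloadEndpoint spec

    print(assigned.network)                       # 192.168.88.128/26
    print(endpoint.subnet_of(assigned.network))   # True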
Jul 2 06:54:04.959000 audit: BPF prog-id=170 op=LOAD Jul 2 06:54:04.959000 audit[4332]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3972 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.966590 kernel: audit: type=1334 audit(1719903244.959:626): prog-id=170 op=LOAD Jul 2 06:54:04.966687 kernel: audit: type=1300 audit(1719903244.959:626): arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010d988 a2=78 a3=0 items=0 ppid=3972 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.959000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396465366663616666393839366233363561316534326531336531 Jul 2 06:54:04.961000 audit: BPF prog-id=171 op=LOAD Jul 2 06:54:04.961000 audit[4332]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00010d720 a2=78 a3=0 items=0 ppid=3972 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396465366663616666393839366233363561316534326531336531 Jul 2 06:54:04.961000 audit: BPF prog-id=171 op=UNLOAD Jul 2 06:54:04.961000 audit: BPF prog-id=170 op=UNLOAD Jul 2 06:54:04.961000 audit: BPF prog-id=172 op=LOAD Jul 2 06:54:04.961000 audit[4332]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00010dbe0 a2=78 a3=0 items=0 ppid=3972 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:04.961000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396465366663616666393839366233363561316534326531336531 Jul 2 06:54:05.079236 kubelet[2344]: I0702 06:54:05.079144 2344 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 06:54:05.080107 kubelet[2344]: E0702 06:54:05.080079 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:05.151212 containerd[1272]: time="2024-07-02T06:54:05.151095229Z" level=info msg="StartContainer for \"339de6fcaff9896b365a1e42e13e17cef69dc3de48ab4425bc3bc6e69f7be9f9\" returns successfully" Jul 2 06:54:05.193895 sshd[4322]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:05.194000 audit[4322]: USER_END pid=4322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:05.194000 audit[4322]: CRED_DISP pid=4322 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:05.198755 systemd-logind[1264]: Session 12 logged out. Waiting for processes to exit. Jul 2 06:54:05.198988 systemd[1]: sshd@11-10.0.0.35:22-10.0.0.1:38028.service: Deactivated successfully. Jul 2 06:54:05.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.35:22-10.0.0.1:38028 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:05.205457 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 06:54:05.206802 systemd-logind[1264]: Removed session 12. Jul 2 06:54:05.393928 systemd-networkd[1104]: cali2cd9f6fd9ad: Gained IPv6LL Jul 2 06:54:05.510557 kubelet[2344]: E0702 06:54:05.510453 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:05.510000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:05.510000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c000fdda80 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:05.510000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:05.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:05.510000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=b a1=c0009659e0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:05.510000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:05.922235 systemd[1]: run-containerd-runc-k8s.io-1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322-runc.gY7hXg.mount: Deactivated successfully. 
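[Editor's note] The audit PROCTITLE records scattered through this log (for iptables-nft-restore, runc, kube-controller-manager, and so on) carry the process command line hex-encoded, with NUL bytes separating the arguments. A small decoding sketch in Python 3, using the hex value from the 06:54:03.820000 netfilter record (comm "iptables-nft-re") earlier in this excerpt:

    # Decode an audit PROCTITLE value: the command line is hex-encoded and
    # the argv elements are separated by NUL bytes, rendered here as spaces.
    def decode_proctitle(hex_value):
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    # proctitle= value copied from the iptables-nft-restore audit record above.
    print(decode_proctitle(
        "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368"
        "002D2D766572626F7365002D2D77616974003130"
        "002D2D776169742D696E74657276616C003530303030"
    ))
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000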
Jul 2 06:54:06.024000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=7755 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.024000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7761 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.024000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.024000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6a a1=c00e4bb7a0 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.024000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6b a1=c00e9a9320 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.024000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:06.024000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:06.024000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6d a1=c00ecd2960 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.024000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:06.040000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.040000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00d135c20 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.040000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:06.056000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7759 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.056000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6f a1=c00e9a9380 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.056000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:06.067000 audit[2251]: AVC avc: denied { watch } for pid=2251 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c246,c489 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:06.067000 audit[2251]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=6e a1=c00e3fc740 a2=fc6 a3=0 items=0 ppid=2105 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c246,c489 key=(null) Jul 2 06:54:06.067000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D31302E302E302E3335002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B756265726E Jul 2 06:54:08.804171 containerd[1272]: time="2024-07-02T06:54:08.804097745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:08.812914 containerd[1272]: time="2024-07-02T06:54:08.812808607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Jul 2 06:54:08.824085 containerd[1272]: time="2024-07-02T06:54:08.823966811Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:08.826819 containerd[1272]: time="2024-07-02T06:54:08.826716198Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:08.829295 containerd[1272]: time="2024-07-02T06:54:08.829200700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:08.830134 containerd[1272]: time="2024-07-02T06:54:08.830081007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id 
\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.955733909s" Jul 2 06:54:08.830215 containerd[1272]: time="2024-07-02T06:54:08.830129850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Jul 2 06:54:08.831255 containerd[1272]: time="2024-07-02T06:54:08.831217061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 06:54:08.837921 containerd[1272]: time="2024-07-02T06:54:08.837879671Z" level=info msg="CreateContainer within sandbox \"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 2 06:54:09.080218 containerd[1272]: time="2024-07-02T06:54:09.080057447Z" level=info msg="CreateContainer within sandbox \"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb\"" Jul 2 06:54:09.081026 containerd[1272]: time="2024-07-02T06:54:09.080974473Z" level=info msg="StartContainer for \"339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb\"" Jul 2 06:54:09.108056 systemd[1]: Started cri-containerd-339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb.scope - libcontainer container 339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb. Jul 2 06:54:09.120000 audit: BPF prog-id=173 op=LOAD Jul 2 06:54:09.121000 audit: BPF prog-id=174 op=LOAD Jul 2 06:54:09.121000 audit[4434]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9988 a2=78 a3=0 items=0 ppid=4284 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396436306334646432383766383461346232323139613361303163 Jul 2 06:54:09.121000 audit: BPF prog-id=175 op=LOAD Jul 2 06:54:09.121000 audit[4434]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001a9720 a2=78 a3=0 items=0 ppid=4284 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396436306334646432383766383461346232323139613361303163 Jul 2 06:54:09.121000 audit: BPF prog-id=175 op=UNLOAD Jul 2 06:54:09.121000 audit: BPF prog-id=174 op=UNLOAD Jul 2 06:54:09.121000 audit: BPF prog-id=176 op=LOAD Jul 2 06:54:09.121000 audit[4434]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001a9be0 a2=78 a3=0 items=0 ppid=4284 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:09.121000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3333396436306334646432383766383461346232323139613361303163 Jul 2 06:54:09.154360 containerd[1272]: time="2024-07-02T06:54:09.154304387Z" level=info msg="StartContainer for \"339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb\" returns successfully" Jul 2 06:54:09.530054 kubelet[2344]: I0702 06:54:09.529900 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bf5c69fc4-6jthk" podStartSLOduration=33.685872208 podStartE2EDuration="38.529849987s" podCreationTimestamp="2024-07-02 06:53:31 +0000 UTC" firstStartedPulling="2024-07-02 06:54:03.986586459 +0000 UTC m=+51.858387691" lastFinishedPulling="2024-07-02 06:54:08.830564238 +0000 UTC m=+56.702365470" observedRunningTime="2024-07-02 06:54:09.529611774 +0000 UTC m=+57.401413006" watchObservedRunningTime="2024-07-02 06:54:09.529849987 +0000 UTC m=+57.401651219" Jul 2 06:54:10.208482 systemd[1]: Started sshd@12-10.0.0.35:22-10.0.0.1:38034.service - OpenSSH per-connection server daemon (10.0.0.1:38034). Jul 2 06:54:10.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.35:22-10.0.0.1:38034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:10.210222 kernel: kauditd_printk_skb: 48 callbacks suppressed Jul 2 06:54:10.210276 kernel: audit: type=1130 audit(1719903250.207:648): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.35:22-10.0.0.1:38034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:10.255000 audit[4488]: USER_ACCT pid=4488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.256653 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 38034 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:10.259264 sshd[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:10.256000 audit[4488]: CRED_ACQ pid=4488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.263933 kernel: audit: type=1101 audit(1719903250.255:649): pid=4488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.264089 kernel: audit: type=1103 audit(1719903250.256:650): pid=4488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.264130 kernel: audit: type=1006 audit(1719903250.257:651): pid=4488 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 2 06:54:10.266139 kernel: audit: type=1300 audit(1719903250.257:651): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc85f2e420 a2=3 a3=7fe4f37d4480 items=0 ppid=1 pid=4488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.257000 audit[4488]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc85f2e420 a2=3 a3=7fe4f37d4480 items=0 ppid=1 pid=4488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.265571 systemd-logind[1264]: New session 13 of user core. Jul 2 06:54:10.269501 kernel: audit: type=1327 audit(1719903250.257:651): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:10.257000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:10.277176 systemd[1]: Started session-13.scope - Session 13 of User core. 
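[Editor's note] The pod_startup_latency_tracker record above (for calico-kube-controllers-7bf5c69fc4-6jthk) can be cross-checked from its own timestamps: the E2E duration is the watch-observed running time minus the pod creation time, and the SLO duration appears to be that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick verification in Python 3, truncating the nanosecond timestamps to microseconds:

    # Re-derive the durations reported by kubelet's pod_startup_latency_tracker
    # from the timestamps printed in the same record.
    from datetime import datetime

    def ts(value):
        # e.g. "2024-07-02 06:54:09.529849987 +0000 UTC"; trim ns -> us for strptime
        date, clock, offset = value.split()[:3]
        if "." in clock:
            clock = clock[: clock.index(".") + 7]
            return datetime.strptime(f"{date} {clock} {offset}", "%Y-%m-%d %H:%M:%S.%f %z")
        return datetime.strptime(f"{date} {clock} {offset}", "%Y-%m-%d %H:%M:%S %z")

    created    = ts("2024-07-02 06:53:31 +0000 UTC")            # podCreationTimestamp
    first_pull = ts("2024-07-02 06:54:03.986586459 +0000 UTC")  # firstStartedPulling
    last_pull  = ts("2024-07-02 06:54:08.830564238 +0000 UTC")  # lastFinishedPulling
    observed   = ts("2024-07-02 06:54:09.529849987 +0000 UTC")  # watchObservedRunningTime

    e2e = (observed - created).total_seconds()
    slo = e2e - (last_pull - first_pull).total_seconds()
    print(f"E2E ~ {e2e:.6f}s, SLO ~ {slo:.6f}s")
    # Reported: podStartE2EDuration=38.529849987s, podStartSLOduration=33.685872208;
    # the sketch agrees to within the truncated nanoseconds.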
Jul 2 06:54:10.282000 audit[4488]: USER_START pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.284000 audit[4490]: CRED_ACQ pid=4490 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.290588 kernel: audit: type=1105 audit(1719903250.282:652): pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.290653 kernel: audit: type=1103 audit(1719903250.284:653): pid=4490 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.389795 sshd[4488]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:10.390000 audit[4488]: USER_END pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.390000 audit[4488]: CRED_DISP pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.397309 kernel: audit: type=1106 audit(1719903250.390:654): pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.397364 kernel: audit: type=1104 audit(1719903250.390:655): pid=4488 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.401518 systemd[1]: sshd@12-10.0.0.35:22-10.0.0.1:38034.service: Deactivated successfully. Jul 2 06:54:10.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.35:22-10.0.0.1:38034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:10.402219 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 06:54:10.402849 systemd-logind[1264]: Session 13 logged out. Waiting for processes to exit. Jul 2 06:54:10.409162 systemd[1]: Started sshd@13-10.0.0.35:22-10.0.0.1:38036.service - OpenSSH per-connection server daemon (10.0.0.1:38036). Jul 2 06:54:10.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.35:22-10.0.0.1:38036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:10.410143 systemd-logind[1264]: Removed session 13. Jul 2 06:54:10.439000 audit[4502]: USER_ACCT pid=4502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.441031 sshd[4502]: Accepted publickey for core from 10.0.0.1 port 38036 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:10.440000 audit[4502]: CRED_ACQ pid=4502 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.440000 audit[4502]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff193b2410 a2=3 a3=7f842435a480 items=0 ppid=1 pid=4502 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.440000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:10.442151 sshd[4502]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:10.446623 systemd-logind[1264]: New session 14 of user core. Jul 2 06:54:10.451957 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 06:54:10.456000 audit[4502]: USER_START pid=4502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.458000 audit[4504]: CRED_ACQ pid=4504 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.645776 sshd[4502]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:10.652220 systemd[1]: Started sshd@14-10.0.0.35:22-10.0.0.1:38066.service - OpenSSH per-connection server daemon (10.0.0.1:38066). Jul 2 06:54:10.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.35:22-10.0.0.1:38066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:10.652000 audit[4502]: USER_END pid=4502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.652000 audit[4502]: CRED_DISP pid=4502 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.35:22-10.0.0.1:38036 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:10.655814 systemd-logind[1264]: Session 14 logged out. Waiting for processes to exit. 
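[Editor's note] The sshd sessions logged here open and close within about a second each, which is easiest to see by pairing the audit USER_START and USER_END records on their ses= field. A rough parsing sketch in Python 3; the two sample lines are abridged copies of the session-13 records above, and the year is assumed because these journal timestamps omit it:

    # Pair audit USER_START / USER_END records by ses= and report how long
    # each session lasted, using the journal timestamp at the start of a line.
    import re
    from datetime import datetime

    RECORD = re.compile(
        r"^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) .*audit\[\d+\]: "
        r"(?P<type>USER_START|USER_END) .*?\bses=(?P<ses>\d+)\b"
    )

    def session_durations(lines, year=2024):
        starts, durations = {}, {}
        for line in lines:
            m = RECORD.search(line)
            if not m:
                continue
            when = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
            if m["type"] == "USER_START":
                starts[m["ses"]] = when
            elif m["ses"] in starts:
                durations[m["ses"]] = (when - starts.pop(m["ses"])).total_seconds()
        return durations

    sample = [
        "Jul 2 06:54:10.282000 audit[4488]: USER_START pid=4488 uid=0 auid=500 ses=13 ...",
        "Jul 2 06:54:10.390000 audit[4488]: USER_END pid=4488 uid=0 auid=500 ses=13 ...",
    ]
    print(session_durations(sample))   # {'13': 0.108}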
Jul 2 06:54:10.656060 systemd[1]: sshd@13-10.0.0.35:22-10.0.0.1:38036.service: Deactivated successfully. Jul 2 06:54:10.656841 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 06:54:10.658579 systemd-logind[1264]: Removed session 14. Jul 2 06:54:10.706000 audit[4517]: USER_ACCT pid=4517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.707526 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 38066 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:10.707000 audit[4517]: CRED_ACQ pid=4517 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.707000 audit[4517]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe6b4c6e10 a2=3 a3=7f558c2ed480 items=0 ppid=1 pid=4517 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.707000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:10.709394 sshd[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:10.715741 systemd-logind[1264]: New session 15 of user core. Jul 2 06:54:10.720981 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 06:54:10.726000 audit[4517]: USER_START pid=4517 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.728000 audit[4520]: CRED_ACQ pid=4520 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.766841 containerd[1272]: time="2024-07-02T06:54:10.766749225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:10.767581 containerd[1272]: time="2024-07-02T06:54:10.767493843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 06:54:10.768674 containerd[1272]: time="2024-07-02T06:54:10.768638020Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:10.770219 containerd[1272]: time="2024-07-02T06:54:10.770184154Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:10.772027 containerd[1272]: time="2024-07-02T06:54:10.771963541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:10.772763 containerd[1272]: time="2024-07-02T06:54:10.772694201Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.941436833s" Jul 2 06:54:10.772872 containerd[1272]: time="2024-07-02T06:54:10.772761028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 06:54:10.774924 containerd[1272]: time="2024-07-02T06:54:10.774878728Z" level=info msg="CreateContainer within sandbox \"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 06:54:10.865307 sshd[4517]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:10.868000 audit[4517]: USER_END pid=4517 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.868000 audit[4517]: CRED_DISP pid=4517 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:10.871694 systemd[1]: sshd@14-10.0.0.35:22-10.0.0.1:38066.service: Deactivated successfully. Jul 2 06:54:10.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.35:22-10.0.0.1:38066 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:10.872558 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 06:54:10.873249 systemd-logind[1264]: Session 15 logged out. Waiting for processes to exit. Jul 2 06:54:10.874441 systemd-logind[1264]: Removed session 15. Jul 2 06:54:10.876561 containerd[1272]: time="2024-07-02T06:54:10.876503237Z" level=info msg="CreateContainer within sandbox \"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"38048489457f7981d7bffcd5ba2a3a9d88e455a73a64c1c9c5ec25813f2541e2\"" Jul 2 06:54:10.877287 containerd[1272]: time="2024-07-02T06:54:10.877251171Z" level=info msg="StartContainer for \"38048489457f7981d7bffcd5ba2a3a9d88e455a73a64c1c9c5ec25813f2541e2\"" Jul 2 06:54:10.904065 systemd[1]: Started cri-containerd-38048489457f7981d7bffcd5ba2a3a9d88e455a73a64c1c9c5ec25813f2541e2.scope - libcontainer container 38048489457f7981d7bffcd5ba2a3a9d88e455a73a64c1c9c5ec25813f2541e2. 
Jul 2 06:54:10.921000 audit: BPF prog-id=177 op=LOAD Jul 2 06:54:10.921000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00013b988 a2=78 a3=0 items=0 ppid=3972 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303438343839343537663739383164376266666364356261326133 Jul 2 06:54:10.921000 audit: BPF prog-id=178 op=LOAD Jul 2 06:54:10.921000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00013b720 a2=78 a3=0 items=0 ppid=3972 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303438343839343537663739383164376266666364356261326133 Jul 2 06:54:10.921000 audit: BPF prog-id=178 op=UNLOAD Jul 2 06:54:10.921000 audit: BPF prog-id=177 op=UNLOAD Jul 2 06:54:10.921000 audit: BPF prog-id=179 op=LOAD Jul 2 06:54:10.921000 audit[4539]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00013bbe0 a2=78 a3=0 items=0 ppid=3972 pid=4539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:10.921000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338303438343839343537663739383164376266666364356261326133 Jul 2 06:54:10.938320 containerd[1272]: time="2024-07-02T06:54:10.938266636Z" level=info msg="StartContainer for \"38048489457f7981d7bffcd5ba2a3a9d88e455a73a64c1c9c5ec25813f2541e2\" returns successfully" Jul 2 06:54:11.281613 kubelet[2344]: I0702 06:54:11.281487 2344 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 06:54:11.282807 kubelet[2344]: I0702 06:54:11.282776 2344 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 06:54:11.536957 kubelet[2344]: I0702 06:54:11.536917 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bdm7p" podStartSLOduration=31.850438401 podStartE2EDuration="40.536857368s" podCreationTimestamp="2024-07-02 06:53:31 +0000 UTC" firstStartedPulling="2024-07-02 06:54:02.08668349 +0000 UTC m=+49.958484722" lastFinishedPulling="2024-07-02 06:54:10.773102457 +0000 UTC m=+58.644903689" observedRunningTime="2024-07-02 06:54:11.536010626 +0000 UTC m=+59.407811858" watchObservedRunningTime="2024-07-02 06:54:11.536857368 +0000 UTC m=+59.408658600" Jul 2 06:54:12.208709 containerd[1272]: time="2024-07-02T06:54:12.208650877Z" level=info msg="StopPodSandbox for 
\"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\"" Jul 2 06:54:12.216630 containerd[1272]: time="2024-07-02T06:54:12.208799290Z" level=info msg="TearDown network for sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" successfully" Jul 2 06:54:12.216630 containerd[1272]: time="2024-07-02T06:54:12.216610487Z" level=info msg="StopPodSandbox for \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" returns successfully" Jul 2 06:54:12.217333 containerd[1272]: time="2024-07-02T06:54:12.217283758Z" level=info msg="RemovePodSandbox for \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\"" Jul 2 06:54:12.229544 containerd[1272]: time="2024-07-02T06:54:12.219848095Z" level=info msg="Forcibly stopping sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\"" Jul 2 06:54:12.229733 containerd[1272]: time="2024-07-02T06:54:12.229585556Z" level=info msg="TearDown network for sandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" successfully" Jul 2 06:54:12.237656 containerd[1272]: time="2024-07-02T06:54:12.237602354Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:54:12.237861 containerd[1272]: time="2024-07-02T06:54:12.237690933Z" level=info msg="RemovePodSandbox \"42f6caed72adaec73bdc7ef7c20cc6c2d68ee561f2fdf069ed0baf07c56c1c71\" returns successfully" Jul 2 06:54:12.238390 containerd[1272]: time="2024-07-02T06:54:12.238344536Z" level=info msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.285 [WARNING][4599] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z6lht-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7affa88-3cb8-490c-b055-74e0023a3b4f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986", Pod:"coredns-76f75df574-z6lht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4370af552be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.286 [INFO][4599] k8s.go 608: Cleaning up netns ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.286 [INFO][4599] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" iface="eth0" netns="" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.286 [INFO][4599] k8s.go 615: Releasing IP address(es) ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.286 [INFO][4599] utils.go 188: Calico CNI releasing IP address ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.316 [INFO][4607] ipam_plugin.go 411: Releasing address using handleID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.316 [INFO][4607] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.316 [INFO][4607] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.322 [WARNING][4607] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.322 [INFO][4607] ipam_plugin.go 439: Releasing address using workloadID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.324 [INFO][4607] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.327008 containerd[1272]: 2024-07-02 06:54:12.325 [INFO][4599] k8s.go 621: Teardown processing complete. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.327820 containerd[1272]: time="2024-07-02T06:54:12.327755691Z" level=info msg="TearDown network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" successfully" Jul 2 06:54:12.327871 containerd[1272]: time="2024-07-02T06:54:12.327820545Z" level=info msg="StopPodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" returns successfully" Jul 2 06:54:12.328276 containerd[1272]: time="2024-07-02T06:54:12.328246465Z" level=info msg="RemovePodSandbox for \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" Jul 2 06:54:12.328322 containerd[1272]: time="2024-07-02T06:54:12.328275430Z" level=info msg="Forcibly stopping sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\"" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.364 [WARNING][4629] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--z6lht-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b7affa88-3cb8-490c-b055-74e0023a3b4f", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"63fabcdeaf3cb3e4d82de0597cfc3aa4cb58a7d627d6da9b20a01910d6913986", Pod:"coredns-76f75df574-z6lht", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4370af552be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.365 [INFO][4629] k8s.go 608: Cleaning up netns ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.365 [INFO][4629] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" iface="eth0" netns="" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.365 [INFO][4629] k8s.go 615: Releasing IP address(es) ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.365 [INFO][4629] utils.go 188: Calico CNI releasing IP address ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.385 [INFO][4637] ipam_plugin.go 411: Releasing address using handleID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.385 [INFO][4637] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.385 [INFO][4637] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.390 [WARNING][4637] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.391 [INFO][4637] ipam_plugin.go 439: Releasing address using workloadID ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" HandleID="k8s-pod-network.833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Workload="localhost-k8s-coredns--76f75df574--z6lht-eth0" Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.392 [INFO][4637] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.397018 containerd[1272]: 2024-07-02 06:54:12.394 [INFO][4629] k8s.go 621: Teardown processing complete. ContainerID="833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a" Jul 2 06:54:12.397507 containerd[1272]: time="2024-07-02T06:54:12.397071624Z" level=info msg="TearDown network for sandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" successfully" Jul 2 06:54:12.447021 containerd[1272]: time="2024-07-02T06:54:12.446962650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:54:12.447207 containerd[1272]: time="2024-07-02T06:54:12.447028845Z" level=info msg="RemovePodSandbox \"833ba7b5e44e990e8cf46a034037cebb83e90ce77622e4f07e1583e8088b262a\" returns successfully" Jul 2 06:54:12.447418 containerd[1272]: time="2024-07-02T06:54:12.447395984Z" level=info msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.478 [WARNING][4660] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdm7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f47c1652-2b34-4c56-adf0-effec8bb0963", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62", Pod:"csi-node-driver-bdm7p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali899dc11f37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.478 [INFO][4660] k8s.go 608: Cleaning up netns ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.478 [INFO][4660] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" iface="eth0" netns="" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.478 [INFO][4660] k8s.go 615: Releasing IP address(es) ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.478 [INFO][4660] utils.go 188: Calico CNI releasing IP address ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.497 [INFO][4668] ipam_plugin.go 411: Releasing address using handleID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.497 [INFO][4668] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.497 [INFO][4668] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.503 [WARNING][4668] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.503 [INFO][4668] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.504 [INFO][4668] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.508038 containerd[1272]: 2024-07-02 06:54:12.505 [INFO][4660] k8s.go 621: Teardown processing complete. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.508038 containerd[1272]: time="2024-07-02T06:54:12.508016408Z" level=info msg="TearDown network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" successfully" Jul 2 06:54:12.508612 containerd[1272]: time="2024-07-02T06:54:12.508054640Z" level=info msg="StopPodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" returns successfully" Jul 2 06:54:12.509603 containerd[1272]: time="2024-07-02T06:54:12.509460814Z" level=info msg="RemovePodSandbox for \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" Jul 2 06:54:12.509603 containerd[1272]: time="2024-07-02T06:54:12.509565734Z" level=info msg="Forcibly stopping sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\"" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.548 [WARNING][4691] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--bdm7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f47c1652-2b34-4c56-adf0-effec8bb0963", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"82dfad015a44a88466f2781c8ac847d565b6aff1521b3a2f73e6b1a2ca1f7d62", Pod:"csi-node-driver-bdm7p", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali899dc11f37a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.548 [INFO][4691] k8s.go 608: Cleaning up netns ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.548 [INFO][4691] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" iface="eth0" netns="" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.548 [INFO][4691] k8s.go 615: Releasing IP address(es) ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.549 [INFO][4691] utils.go 188: Calico CNI releasing IP address ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.571 [INFO][4699] ipam_plugin.go 411: Releasing address using handleID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.571 [INFO][4699] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.571 [INFO][4699] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.575 [WARNING][4699] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.576 [INFO][4699] ipam_plugin.go 439: Releasing address using workloadID ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" HandleID="k8s-pod-network.67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Workload="localhost-k8s-csi--node--driver--bdm7p-eth0" Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.577 [INFO][4699] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.579881 containerd[1272]: 2024-07-02 06:54:12.578 [INFO][4691] k8s.go 621: Teardown processing complete. ContainerID="67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f" Jul 2 06:54:12.580397 containerd[1272]: time="2024-07-02T06:54:12.579925602Z" level=info msg="TearDown network for sandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" successfully" Jul 2 06:54:12.583506 containerd[1272]: time="2024-07-02T06:54:12.583441951Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:54:12.583506 containerd[1272]: time="2024-07-02T06:54:12.583511854Z" level=info msg="RemovePodSandbox \"67c1b71ee589a372a2335c1818c304e78efff6b2a82432adaeafd49a4ebc157f\" returns successfully" Jul 2 06:54:12.584074 containerd[1272]: time="2024-07-02T06:54:12.584045269Z" level=info msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.623 [WARNING][4721] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--st82c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d461e7f3-5eb3-4f3a-bd28-01c916db29c2", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637", Pod:"coredns-76f75df574-st82c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218e1c2236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.623 [INFO][4721] k8s.go 608: Cleaning up netns ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.623 [INFO][4721] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" iface="eth0" netns="" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.623 [INFO][4721] k8s.go 615: Releasing IP address(es) ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.623 [INFO][4721] utils.go 188: Calico CNI releasing IP address ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.646 [INFO][4728] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.646 [INFO][4728] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.646 [INFO][4728] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.651 [WARNING][4728] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.651 [INFO][4728] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.653 [INFO][4728] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.656216 containerd[1272]: 2024-07-02 06:54:12.654 [INFO][4721] k8s.go 621: Teardown processing complete. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.656796 containerd[1272]: time="2024-07-02T06:54:12.656265525Z" level=info msg="TearDown network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" successfully" Jul 2 06:54:12.656796 containerd[1272]: time="2024-07-02T06:54:12.656302335Z" level=info msg="StopPodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" returns successfully" Jul 2 06:54:12.656890 containerd[1272]: time="2024-07-02T06:54:12.656850959Z" level=info msg="RemovePodSandbox for \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" Jul 2 06:54:12.656928 containerd[1272]: time="2024-07-02T06:54:12.656894772Z" level=info msg="Forcibly stopping sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\"" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.694 [WARNING][4751] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--st82c-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d461e7f3-5eb3-4f3a-bd28-01c916db29c2", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bbe5fee7f3f61bfbc25639716c2db5d1618b4d4f7d0a977679bf04b63112d637", Pod:"coredns-76f75df574-st82c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218e1c2236", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.694 [INFO][4751] k8s.go 608: Cleaning up netns ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.694 [INFO][4751] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" iface="eth0" netns="" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.694 [INFO][4751] k8s.go 615: Releasing IP address(es) ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.694 [INFO][4751] utils.go 188: Calico CNI releasing IP address ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.713 [INFO][4758] ipam_plugin.go 411: Releasing address using handleID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.713 [INFO][4758] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.713 [INFO][4758] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.719 [WARNING][4758] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.719 [INFO][4758] ipam_plugin.go 439: Releasing address using workloadID ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" HandleID="k8s-pod-network.8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Workload="localhost-k8s-coredns--76f75df574--st82c-eth0" Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.721 [INFO][4758] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.724097 containerd[1272]: 2024-07-02 06:54:12.722 [INFO][4751] k8s.go 621: Teardown processing complete. ContainerID="8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc" Jul 2 06:54:12.724751 containerd[1272]: time="2024-07-02T06:54:12.724112985Z" level=info msg="TearDown network for sandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" successfully" Jul 2 06:54:12.784152 containerd[1272]: time="2024-07-02T06:54:12.783980046Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:54:12.784152 containerd[1272]: time="2024-07-02T06:54:12.784062643Z" level=info msg="RemovePodSandbox \"8e8234379397eaac6559680a0265326457a171a5edba0c28604e55ff9ae6f1cc\" returns successfully" Jul 2 06:54:12.784653 containerd[1272]: time="2024-07-02T06:54:12.784612408Z" level=info msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.816 [WARNING][4782] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0", GenerateName:"calico-kube-controllers-7bf5c69fc4-", Namespace:"calico-system", SelfLink:"", UID:"d5281e8f-feca-4ab3-9b29-c3d038aed0d0", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf5c69fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38", Pod:"calico-kube-controllers-7bf5c69fc4-6jthk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cd9f6fd9ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.816 [INFO][4782] k8s.go 608: Cleaning up netns ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.816 [INFO][4782] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" iface="eth0" netns="" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.816 [INFO][4782] k8s.go 615: Releasing IP address(es) ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.816 [INFO][4782] utils.go 188: Calico CNI releasing IP address ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.835 [INFO][4789] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.835 [INFO][4789] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.835 [INFO][4789] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.841 [WARNING][4789] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.841 [INFO][4789] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.842 [INFO][4789] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.845330 containerd[1272]: 2024-07-02 06:54:12.844 [INFO][4782] k8s.go 621: Teardown processing complete. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.845903 containerd[1272]: time="2024-07-02T06:54:12.845375133Z" level=info msg="TearDown network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" successfully" Jul 2 06:54:12.845903 containerd[1272]: time="2024-07-02T06:54:12.845412574Z" level=info msg="StopPodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" returns successfully" Jul 2 06:54:12.845953 containerd[1272]: time="2024-07-02T06:54:12.845904960Z" level=info msg="RemovePodSandbox for \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" Jul 2 06:54:12.845975 containerd[1272]: time="2024-07-02T06:54:12.845930850Z" level=info msg="Forcibly stopping sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\"" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.879 [WARNING][4813] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0", GenerateName:"calico-kube-controllers-7bf5c69fc4-", Namespace:"calico-system", SelfLink:"", UID:"d5281e8f-feca-4ab3-9b29-c3d038aed0d0", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 53, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bf5c69fc4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fe59b26df4abf95ecf7c653a24c2c9eb9120b306e794157ce2829ac0e074de38", Pod:"calico-kube-controllers-7bf5c69fc4-6jthk", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali2cd9f6fd9ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.879 [INFO][4813] k8s.go 608: Cleaning up netns ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.879 [INFO][4813] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" iface="eth0" netns="" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.879 [INFO][4813] k8s.go 615: Releasing IP address(es) ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.879 [INFO][4813] utils.go 188: Calico CNI releasing IP address ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.898 [INFO][4821] ipam_plugin.go 411: Releasing address using handleID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.898 [INFO][4821] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.898 [INFO][4821] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.904 [WARNING][4821] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.904 [INFO][4821] ipam_plugin.go 439: Releasing address using workloadID ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" HandleID="k8s-pod-network.ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Workload="localhost-k8s-calico--kube--controllers--7bf5c69fc4--6jthk-eth0" Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.905 [INFO][4821] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 06:54:12.908412 containerd[1272]: 2024-07-02 06:54:12.907 [INFO][4813] k8s.go 621: Teardown processing complete. ContainerID="ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453" Jul 2 06:54:12.909034 containerd[1272]: time="2024-07-02T06:54:12.908979642Z" level=info msg="TearDown network for sandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" successfully" Jul 2 06:54:12.964209 containerd[1272]: time="2024-07-02T06:54:12.964144198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 06:54:12.964422 containerd[1272]: time="2024-07-02T06:54:12.964272171Z" level=info msg="RemovePodSandbox \"ec03d16637de86b134c370e521786eae246590a4e7d5fe00d8306f9e12cd9453\" returns successfully" Jul 2 06:54:15.884018 systemd[1]: Started sshd@15-10.0.0.35:22-10.0.0.1:33896.service - OpenSSH per-connection server daemon (10.0.0.1:33896). Jul 2 06:54:15.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.885169 kernel: kauditd_printk_skb: 34 callbacks suppressed Jul 2 06:54:15.885238 kernel: audit: type=1130 audit(1719903255.883:680): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:15.921000 audit[4835]: USER_ACCT pid=4835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.922686 sshd[4835]: Accepted publickey for core from 10.0.0.1 port 33896 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:15.924392 sshd[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:15.922000 audit[4835]: CRED_ACQ pid=4835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.928324 systemd-logind[1264]: New session 16 of user core. 
Jul 2 06:54:15.930395 kernel: audit: type=1101 audit(1719903255.921:681): pid=4835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.930450 kernel: audit: type=1103 audit(1719903255.922:682): pid=4835 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.930483 kernel: audit: type=1006 audit(1719903255.922:683): pid=4835 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 2 06:54:15.922000 audit[4835]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeae17b010 a2=3 a3=7f58780f2480 items=0 ppid=1 pid=4835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:15.936937 kernel: audit: type=1300 audit(1719903255.922:683): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeae17b010 a2=3 a3=7f58780f2480 items=0 ppid=1 pid=4835 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:15.936989 kernel: audit: type=1327 audit(1719903255.922:683): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:15.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:15.944032 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 06:54:15.947000 audit[4835]: USER_START pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.949000 audit[4837]: CRED_ACQ pid=4837 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.956860 kernel: audit: type=1105 audit(1719903255.947:684): pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:15.956920 kernel: audit: type=1103 audit(1719903255.949:685): pid=4837 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:16.066716 sshd[4835]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:16.066000 audit[4835]: USER_END pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:16.069631 systemd[1]: sshd@15-10.0.0.35:22-10.0.0.1:33896.service: Deactivated successfully. 
Jul 2 06:54:16.070496 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 06:54:16.071027 systemd-logind[1264]: Session 16 logged out. Waiting for processes to exit. Jul 2 06:54:16.067000 audit[4835]: CRED_DISP pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:16.071750 systemd-logind[1264]: Removed session 16. Jul 2 06:54:16.073848 kernel: audit: type=1106 audit(1719903256.066:686): pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:16.073899 kernel: audit: type=1104 audit(1719903256.067:687): pid=4835 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:16.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.35:22-10.0.0.1:33896 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:16.560014 systemd[1]: run-containerd-runc-k8s.io-339d60c4dd287f84a4b2219a3a01c231fb24c9149908bbd5b33f1643869763fb-runc.V1iG7B.mount: Deactivated successfully. Jul 2 06:54:21.086113 systemd[1]: Started sshd@16-10.0.0.35:22-10.0.0.1:33918.service - OpenSSH per-connection server daemon (10.0.0.1:33918). Jul 2 06:54:21.087847 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:54:21.087893 kernel: audit: type=1130 audit(1719903261.085:689): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:33918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:33918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:21.118000 audit[4868]: USER_ACCT pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.119060 sshd[4868]: Accepted publickey for core from 10.0.0.1 port 33918 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:21.120398 sshd[4868]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:21.119000 audit[4868]: CRED_ACQ pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.124519 systemd-logind[1264]: New session 17 of user core. 
Jul 2 06:54:21.125091 kernel: audit: type=1101 audit(1719903261.118:690): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.125124 kernel: audit: type=1103 audit(1719903261.119:691): pid=4868 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.125144 kernel: audit: type=1006 audit(1719903261.119:692): pid=4868 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 2 06:54:21.119000 audit[4868]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe15cad3d0 a2=3 a3=7ffbc1296480 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:21.129714 kernel: audit: type=1300 audit(1719903261.119:692): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe15cad3d0 a2=3 a3=7ffbc1296480 items=0 ppid=1 pid=4868 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:21.129771 kernel: audit: type=1327 audit(1719903261.119:692): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:21.119000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:21.141084 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 06:54:21.144000 audit[4868]: USER_START pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.145000 audit[4870]: CRED_ACQ pid=4870 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.150941 kernel: audit: type=1105 audit(1719903261.144:693): pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.150993 kernel: audit: type=1103 audit(1719903261.145:694): pid=4870 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.219274 kubelet[2344]: E0702 06:54:21.219227 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:21.246702 sshd[4868]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:21.247000 audit[4868]: USER_END pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.249418 systemd[1]: sshd@16-10.0.0.35:22-10.0.0.1:33918.service: Deactivated successfully. Jul 2 06:54:21.250294 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 06:54:21.250932 systemd-logind[1264]: Session 17 logged out. Waiting for processes to exit. Jul 2 06:54:21.247000 audit[4868]: CRED_DISP pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.251886 systemd-logind[1264]: Removed session 17. Jul 2 06:54:21.254392 kernel: audit: type=1106 audit(1719903261.247:695): pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.254493 kernel: audit: type=1104 audit(1719903261.247:696): pid=4868 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:21.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.35:22-10.0.0.1:33918 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c0025cb0a0 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=c a1=c002356d20 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:24.187000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:24.187000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:24.187000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002356d40 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:24.187000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:24.189000 audit[2177]: AVC avc: denied { watch } for pid=2177 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7753 scontext=system_u:system_r:container_t:s0:c237,c775 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jul 2 06:54:24.189000 audit[2177]: SYSCALL arch=c000003e syscall=254 success=no exit=-13 a0=a a1=c002356d60 a2=fc6 a3=0 items=0 ppid=2073 pid=2177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c237,c775 key=(null) Jul 2 06:54:24.189000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jul 2 06:54:26.266157 systemd[1]: Started sshd@17-10.0.0.35:22-10.0.0.1:47522.service - OpenSSH per-connection server daemon (10.0.0.1:47522). Jul 2 06:54:26.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:26.270933 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 06:54:26.271047 kernel: audit: type=1130 audit(1719903266.264:702): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:26.309000 audit[4896]: USER_ACCT pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.313316 sshd[4896]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:26.316590 sshd[4896]: Accepted publickey for core from 10.0.0.1 port 47522 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:26.316902 kernel: audit: type=1101 audit(1719903266.309:703): pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.311000 audit[4896]: CRED_ACQ pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.322675 kernel: audit: type=1103 audit(1719903266.311:704): pid=4896 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.322753 kernel: audit: type=1006 audit(1719903266.311:705): pid=4896 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 2 06:54:26.322796 kernel: audit: type=1300 audit(1719903266.311:705): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3c1e4490 a2=3 a3=7f6dc7224480 items=0 ppid=1 pid=4896 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.311000 audit[4896]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3c1e4490 a2=3 a3=7f6dc7224480 items=0 ppid=1 pid=4896 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.325601 kubelet[2344]: I0702 06:54:26.325538 2344 topology_manager.go:215] "Topology Admit Handler" podUID="726c442d-5dcf-409e-af08-447e2853be61" podNamespace="calico-apiserver" podName="calico-apiserver-59bd8874df-dbftp" Jul 2 06:54:26.336120 kernel: audit: type=1327 audit(1719903266.311:705): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:26.336154 kernel: audit: type=1325 audit(1719903266.327:706): table=filter:111 family=2 entries=9 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.336171 kernel: audit: type=1300 audit(1719903266.327:706): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdbb967780 a2=0 a3=7ffdbb96776c items=0 ppid=2535 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.336227 kernel: audit: type=1327 audit(1719903266.327:706): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:26.311000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 
06:54:26.327000 audit[4899]: NETFILTER_CFG table=filter:111 family=2 entries=9 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.327000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffdbb967780 a2=0 a3=7ffdbb96776c items=0 ppid=2535 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.327000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:26.327362 systemd-logind[1264]: New session 18 of user core. Jul 2 06:54:26.336095 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 06:54:26.340000 audit[4899]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.340000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffdbb967780 a2=0 a3=7ffdbb96776c items=0 ppid=2535 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.340000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:26.344405 systemd[1]: Created slice kubepods-besteffort-pod726c442d_5dcf_409e_af08_447e2853be61.slice - libcontainer container kubepods-besteffort-pod726c442d_5dcf_409e_af08_447e2853be61.slice. Jul 2 06:54:26.344831 kernel: audit: type=1325 audit(1719903266.340:707): table=nat:112 family=2 entries=20 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.348000 audit[4901]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.348000 audit[4901]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffe023690b0 a2=0 a3=7ffe0236909c items=0 ppid=2535 pid=4901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:26.348000 audit[4896]: USER_START pid=4896 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.350000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.348000 audit[4901]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:26.348000 audit[4901]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffe023690b0 a2=0 a3=7ffe0236909c items=0 ppid=2535 pid=4901 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:26.348000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:26.458196 sshd[4896]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:26.457000 audit[4896]: USER_END pid=4896 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.457000 audit[4896]: CRED_DISP pid=4896 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:26.460615 systemd[1]: sshd@17-10.0.0.35:22-10.0.0.1:47522.service: Deactivated successfully. Jul 2 06:54:26.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.35:22-10.0.0.1:47522 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:26.461384 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 06:54:26.461921 systemd-logind[1264]: Session 18 logged out. Waiting for processes to exit. Jul 2 06:54:26.462551 systemd-logind[1264]: Removed session 18. Jul 2 06:54:26.519092 kubelet[2344]: I0702 06:54:26.518934 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bnk\" (UniqueName: \"kubernetes.io/projected/726c442d-5dcf-409e-af08-447e2853be61-kube-api-access-v7bnk\") pod \"calico-apiserver-59bd8874df-dbftp\" (UID: \"726c442d-5dcf-409e-af08-447e2853be61\") " pod="calico-apiserver/calico-apiserver-59bd8874df-dbftp" Jul 2 06:54:26.519092 kubelet[2344]: I0702 06:54:26.518994 2344 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/726c442d-5dcf-409e-af08-447e2853be61-calico-apiserver-certs\") pod \"calico-apiserver-59bd8874df-dbftp\" (UID: \"726c442d-5dcf-409e-af08-447e2853be61\") " pod="calico-apiserver/calico-apiserver-59bd8874df-dbftp" Jul 2 06:54:26.620374 kubelet[2344]: E0702 06:54:26.620309 2344 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 06:54:26.620767 kubelet[2344]: E0702 06:54:26.620440 2344 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/726c442d-5dcf-409e-af08-447e2853be61-calico-apiserver-certs podName:726c442d-5dcf-409e-af08-447e2853be61 nodeName:}" failed. No retries permitted until 2024-07-02 06:54:27.120418648 +0000 UTC m=+74.992219880 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/726c442d-5dcf-409e-af08-447e2853be61-calico-apiserver-certs") pod "calico-apiserver-59bd8874df-dbftp" (UID: "726c442d-5dcf-409e-af08-447e2853be61") : secret "calico-apiserver-certs" not found Jul 2 06:54:27.219304 kubelet[2344]: E0702 06:54:27.219208 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:27.248370 containerd[1272]: time="2024-07-02T06:54:27.248305197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59bd8874df-dbftp,Uid:726c442d-5dcf-409e-af08-447e2853be61,Namespace:calico-apiserver,Attempt:0,}" Jul 2 06:54:27.377701 systemd-networkd[1104]: cali127f1f2309b: Link UP Jul 2 06:54:27.379612 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 06:54:27.379690 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali127f1f2309b: link becomes ready Jul 2 06:54:27.379983 systemd-networkd[1104]: cali127f1f2309b: Gained carrier Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.301 [INFO][4915] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0 calico-apiserver-59bd8874df- calico-apiserver 726c442d-5dcf-409e-af08-447e2853be61 1113 0 2024-07-02 06:54:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59bd8874df projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59bd8874df-dbftp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali127f1f2309b [] []}} ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.301 [INFO][4915] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.331 [INFO][4928] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" HandleID="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Workload="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.340 [INFO][4928] ipam_plugin.go 264: Auto assigning IP ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" HandleID="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Workload="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000134020), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-59bd8874df-dbftp", "timestamp":"2024-07-02 06:54:27.33126906 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.340 [INFO][4928] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.341 [INFO][4928] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.341 [INFO][4928] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.342 [INFO][4928] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.348 [INFO][4928] ipam.go 372: Looking up existing affinities for host host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.352 [INFO][4928] ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.354 [INFO][4928] ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.359 [INFO][4928] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.360 [INFO][4928] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.363 [INFO][4928] ipam.go 1685: Creating new handle: k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.367 [INFO][4928] ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.372 [INFO][4928] ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.372 [INFO][4928] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" host="localhost" Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.372 [INFO][4928] ipam_plugin.go 373: Released host-wide IPAM lock. 
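The ipam.go entries above show Calico's host-affine IPAM at work: the host "localhost" holds an affinity for the block 192.168.88.128/26, and the plugin claims 192.168.88.133/26 out of that block for the new calico-apiserver pod. A minimal Python sketch (illustrative only, not Calico's own code) of how such a /26 affinity block relates to the addresses it can hand out:

    import ipaddress

    # Block and assignment copied from the ipam.go lines above.
    block = ipaddress.ip_network("192.168.88.128/26")
    assigned = ipaddress.ip_address("192.168.88.133")

    print(block.num_addresses)      # 64 addresses per /26 affinity block
    print(assigned in block)        # True: .133 falls inside the block
    print(list(block.hosts())[:4])  # first few candidate addresses (.129 onwards)

The endpoint written to the datastore below then carries the single-address form, IPNetworks ["192.168.88.133/32"].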
Jul 2 06:54:27.393090 containerd[1272]: 2024-07-02 06:54:27.372 [INFO][4928] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" HandleID="k8s-pod-network.e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Workload="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.375 [INFO][4915] k8s.go 386: Populated endpoint ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0", GenerateName:"calico-apiserver-59bd8874df-", Namespace:"calico-apiserver", SelfLink:"", UID:"726c442d-5dcf-409e-af08-447e2853be61", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59bd8874df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59bd8874df-dbftp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali127f1f2309b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.375 [INFO][4915] k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.375 [INFO][4915] dataplane_linux.go 68: Setting the host side veth name to cali127f1f2309b ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.380 [INFO][4915] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.381 [INFO][4915] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" 
Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0", GenerateName:"calico-apiserver-59bd8874df-", Namespace:"calico-apiserver", SelfLink:"", UID:"726c442d-5dcf-409e-af08-447e2853be61", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 6, 54, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59bd8874df", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb", Pod:"calico-apiserver-59bd8874df-dbftp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali127f1f2309b", MAC:"52:77:a9:22:10:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 06:54:27.393698 containerd[1272]: 2024-07-02 06:54:27.390 [INFO][4915] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb" Namespace="calico-apiserver" Pod="calico-apiserver-59bd8874df-dbftp" WorkloadEndpoint="localhost-k8s-calico--apiserver--59bd8874df--dbftp-eth0" Jul 2 06:54:27.407000 audit[4953]: NETFILTER_CFG table=filter:115 family=2 entries=55 op=nft_register_chain pid=4953 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 2 06:54:27.407000 audit[4953]: SYSCALL arch=c000003e syscall=46 success=yes exit=27464 a0=3 a1=7ffdd48f73f0 a2=0 a3=7ffdd48f73dc items=0 ppid=3652 pid=4953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:27.407000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 2 06:54:27.414615 containerd[1272]: time="2024-07-02T06:54:27.414492685Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 06:54:27.414830 containerd[1272]: time="2024-07-02T06:54:27.414632951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:27.414830 containerd[1272]: time="2024-07-02T06:54:27.414682164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 06:54:27.414830 containerd[1272]: time="2024-07-02T06:54:27.414701671Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 06:54:27.446043 systemd[1]: Started cri-containerd-e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb.scope - libcontainer container e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb. Jul 2 06:54:27.456000 audit: BPF prog-id=180 op=LOAD Jul 2 06:54:27.456000 audit: BPF prog-id=181 op=LOAD Jul 2 06:54:27.456000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4960 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:27.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373130613865373636373665363935383337393562366235613839 Jul 2 06:54:27.456000 audit: BPF prog-id=182 op=LOAD Jul 2 06:54:27.456000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4960 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:27.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373130613865373636373665363935383337393562366235613839 Jul 2 06:54:27.456000 audit: BPF prog-id=182 op=UNLOAD Jul 2 06:54:27.456000 audit: BPF prog-id=181 op=UNLOAD Jul 2 06:54:27.456000 audit: BPF prog-id=183 op=LOAD Jul 2 06:54:27.456000 audit[4971]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4960 pid=4971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:27.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539373130613865373636373665363935383337393562366235613839 Jul 2 06:54:27.459588 systemd-resolved[1215]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 2 06:54:27.492725 containerd[1272]: time="2024-07-02T06:54:27.492586802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59bd8874df-dbftp,Uid:726c442d-5dcf-409e-af08-447e2853be61,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb\"" Jul 2 06:54:27.494592 containerd[1272]: time="2024-07-02T06:54:27.494546084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 06:54:29.266011 systemd-networkd[1104]: cali127f1f2309b: Gained IPv6LL Jul 2 06:54:30.661280 containerd[1272]: time="2024-07-02T06:54:30.661219936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:30.707762 containerd[1272]: time="2024-07-02T06:54:30.707694088Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 06:54:30.732536 containerd[1272]: time="2024-07-02T06:54:30.732492691Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:30.784762 containerd[1272]: time="2024-07-02T06:54:30.784718883Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:30.838673 containerd[1272]: time="2024-07-02T06:54:30.838621790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 06:54:30.839386 containerd[1272]: time="2024-07-02T06:54:30.839349738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.344742709s" Jul 2 06:54:30.839442 containerd[1272]: time="2024-07-02T06:54:30.839390065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 06:54:30.840909 containerd[1272]: time="2024-07-02T06:54:30.840873754Z" level=info msg="CreateContainer within sandbox \"e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 06:54:31.020443 containerd[1272]: time="2024-07-02T06:54:31.020303330Z" level=info msg="CreateContainer within sandbox \"e9710a8e76676e69583795b6b5a897a7e4cb002bb71c3ed6daf9fefc15b8e0eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991\"" Jul 2 06:54:31.021128 containerd[1272]: time="2024-07-02T06:54:31.021103875Z" level=info msg="StartContainer for \"0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991\"" Jul 2 06:54:31.050195 systemd[1]: run-containerd-runc-k8s.io-0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991-runc.u7UaJ8.mount: Deactivated successfully. Jul 2 06:54:31.070915 systemd[1]: Started cri-containerd-0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991.scope - libcontainer container 0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991. 
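The containerd messages above record the pull of ghcr.io/flatcar/calico/apiserver:v3.28.0: about 40,421,260 bytes read, a reported image size of 41,869,036 bytes, and a pull time of 3.344742709s. A quick back-of-the-envelope sketch using only the figures quoted in the log (nothing is measured here):

    # Figures copied verbatim from the containerd entries above.
    bytes_read   = 40_421_260      # "active requests=0, bytes read=40421260"
    image_size   = 41_869_036      # size reported alongside the repo digest
    pull_seconds = 3.344742709     # "in 3.344742709s"

    mib = bytes_read / (1024 * 1024)
    print(f"{mib:.1f} MiB read in {pull_seconds:.2f} s, ~{mib / pull_seconds:.1f} MiB/s effective")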
Jul 2 06:54:31.079000 audit: BPF prog-id=184 op=LOAD Jul 2 06:54:31.080000 audit: BPF prog-id=185 op=LOAD Jul 2 06:54:31.080000 audit[5010]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018d988 a2=78 a3=0 items=0 ppid=4960 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316630643766633833653233376538616630646465613835343161 Jul 2 06:54:31.080000 audit: BPF prog-id=186 op=LOAD Jul 2 06:54:31.080000 audit[5010]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00018d720 a2=78 a3=0 items=0 ppid=4960 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316630643766633833653233376538616630646465613835343161 Jul 2 06:54:31.080000 audit: BPF prog-id=186 op=UNLOAD Jul 2 06:54:31.080000 audit: BPF prog-id=185 op=UNLOAD Jul 2 06:54:31.080000 audit: BPF prog-id=187 op=LOAD Jul 2 06:54:31.080000 audit[5010]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00018dbe0 a2=78 a3=0 items=0 ppid=4960 pid=5010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.080000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3063316630643766633833653233376538616630646465613835343161 Jul 2 06:54:31.153237 containerd[1272]: time="2024-07-02T06:54:31.153183180Z" level=info msg="StartContainer for \"0c1f0d7fc83e237e8af0ddea8541afefab228bdaf957a843a0cff2cd5ff09991\" returns successfully" Jul 2 06:54:31.470032 systemd[1]: Started sshd@18-10.0.0.35:22-10.0.0.1:47534.service - OpenSSH per-connection server daemon (10.0.0.1:47534). Jul 2 06:54:31.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:47534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:31.473722 kernel: kauditd_printk_skb: 40 callbacks suppressed Jul 2 06:54:31.473801 kernel: audit: type=1130 audit(1719903271.468:728): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:47534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:31.514000 audit[5043]: USER_ACCT pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.516626 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 47534 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:31.518176 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:31.522767 kernel: audit: type=1101 audit(1719903271.514:729): pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.522834 systemd-logind[1264]: New session 19 of user core. Jul 2 06:54:31.564236 kernel: audit: type=1103 audit(1719903271.515:730): pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.564270 kernel: audit: type=1006 audit(1719903271.515:731): pid=5043 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jul 2 06:54:31.564287 kernel: audit: type=1300 audit(1719903271.515:731): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd1c8a340 a2=3 a3=7fd655d99480 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.564310 kernel: audit: type=1327 audit(1719903271.515:731): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:31.515000 audit[5043]: CRED_ACQ pid=5043 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.515000 audit[5043]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdd1c8a340 a2=3 a3=7fd655d99480 items=0 ppid=1 pid=5043 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.515000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:31.564156 systemd[1]: Started session-19.scope - Session 19 of User core. 
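Every SSH login in this log leaves the same audit trail: USER_ACCT and CRED_ACQ while sshd validates the "core" account, USER_START when the session scope starts, then USER_END and CRED_DISP at logout, all linked by the ses= field (sessions 18 and 19 so far). A rough sketch for pairing those records from a journal dump with one record per line (the file name is hypothetical; the field layout matches the lines above):

    import re
    from datetime import datetime

    line_re = re.compile(
        r'^(?P<ts>\w{3}\s+\d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: '
        r'(?P<type>USER_START|USER_END) .*\bses=(?P<ses>\d+)\b')

    starts = {}
    with open("journal.txt") as f:             # hypothetical one-record-per-line dump
        for line in f:
            m = line_re.match(line)
            if not m:
                continue
            ts = datetime.strptime(m["ts"], "%b %d %H:%M:%S.%f")
            if m["type"] == "USER_START":
                starts[m["ses"]] = ts
            else:
                t0 = starts.pop(m["ses"], None)
                if t0 is not None:
                    print(f"ssh session {m['ses']}: open for {(ts - t0).total_seconds():.1f}s")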
Jul 2 06:54:31.568000 audit[5043]: USER_START pid=5043 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.569000 audit[5045]: CRED_ACQ pid=5045 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.577975 kernel: audit: type=1105 audit(1719903271.568:732): pid=5043 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.578062 kernel: audit: type=1103 audit(1719903271.569:733): pid=5045 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.580004 kubelet[2344]: I0702 06:54:31.579968 2344 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-59bd8874df-dbftp" podStartSLOduration=2.234360104 podStartE2EDuration="5.579936592s" podCreationTimestamp="2024-07-02 06:54:26 +0000 UTC" firstStartedPulling="2024-07-02 06:54:27.494068829 +0000 UTC m=+75.365870051" lastFinishedPulling="2024-07-02 06:54:30.839645307 +0000 UTC m=+78.711446539" observedRunningTime="2024-07-02 06:54:31.578726542 +0000 UTC m=+79.450527774" watchObservedRunningTime="2024-07-02 06:54:31.579936592 +0000 UTC m=+79.451737814" Jul 2 06:54:31.828000 audit[5055]: NETFILTER_CFG table=filter:116 family=2 entries=10 op=nft_register_rule pid=5055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:31.828000 audit[5055]: SYSCALL arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd5a8f8d60 a2=0 a3=7ffd5a8f8d4c items=0 ppid=2535 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.833873 sshd[5043]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:31.837635 kernel: audit: type=1325 audit(1719903271.828:734): table=filter:116 family=2 entries=10 op=nft_register_rule pid=5055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:31.837731 kernel: audit: type=1300 audit(1719903271.828:734): arch=c000003e syscall=46 success=yes exit=3676 a0=3 a1=7ffd5a8f8d60 a2=0 a3=7ffd5a8f8d4c items=0 ppid=2535 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.828000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:31.836000 audit[5043]: USER_END pid=5043 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Jul 2 06:54:31.836000 audit[5043]: CRED_DISP pid=5043 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.829000 audit[5055]: NETFILTER_CFG table=nat:117 family=2 entries=20 op=nft_register_rule pid=5055 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:31.829000 audit[5055]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffd5a8f8d60 a2=0 a3=7ffd5a8f8d4c items=0 ppid=2535 pid=5055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.829000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:31.843016 systemd[1]: sshd@18-10.0.0.35:22-10.0.0.1:47534.service: Deactivated successfully. Jul 2 06:54:31.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.35:22-10.0.0.1:47534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:31.843565 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 06:54:31.844271 systemd-logind[1264]: Session 19 logged out. Waiting for processes to exit. Jul 2 06:54:31.849481 systemd[1]: Started sshd@19-10.0.0.35:22-10.0.0.1:47548.service - OpenSSH per-connection server daemon (10.0.0.1:47548). Jul 2 06:54:31.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.35:22-10.0.0.1:47548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:31.850397 systemd-logind[1264]: Removed session 19. Jul 2 06:54:31.879000 audit[5058]: USER_ACCT pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.881486 sshd[5058]: Accepted publickey for core from 10.0.0.1 port 47548 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:31.880000 audit[5058]: CRED_ACQ pid=5058 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.880000 audit[5058]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe4efb2150 a2=3 a3=7f57e06a4480 items=0 ppid=1 pid=5058 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:31.880000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:31.882527 sshd[5058]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:31.885857 systemd-logind[1264]: New session 20 of user core. Jul 2 06:54:31.891982 systemd[1]: Started session-20.scope - Session 20 of User core. 
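The kubelet pod_startup_latency_tracker entry above reports two durations for the new calico-apiserver pod: podStartE2EDuration of 5.579936592s and podStartSLOduration of 2.234360104s. Using the monotonic m=+ offsets quoted in the same entry, the numbers are consistent with the SLO figure being the end-to-end time minus the image-pull window, as this small check shows (all values copied from the log):

    # Monotonic offsets (m=+...) and durations from the tracker entry above.
    first_started_pulling = 75.365870051
    last_finished_pulling = 78.711446539
    pod_start_e2e         = 5.579936592

    image_pull = last_finished_pulling - first_started_pulling   # ~3.3456s
    slo        = pod_start_e2e - image_pull
    print(f"image pull window {image_pull:.9f}s, E2E minus pull {slo:.9f}s")  # ~2.234360104s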
Jul 2 06:54:31.894000 audit[5058]: USER_START pid=5058 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:31.895000 audit[5060]: CRED_ACQ pid=5060 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:32.219777 kubelet[2344]: E0702 06:54:32.219642 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:32.724000 audit[5072]: NETFILTER_CFG table=filter:118 family=2 entries=9 op=nft_register_rule pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:32.724000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffc53fa3650 a2=0 a3=7ffc53fa363c items=0 ppid=2535 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:32.724000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:32.725000 audit[5072]: NETFILTER_CFG table=nat:119 family=2 entries=27 op=nft_register_chain pid=5072 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:32.725000 audit[5072]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc53fa3650 a2=0 a3=7ffc53fa363c items=0 ppid=2535 pid=5072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:32.725000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:33.153310 sshd[5058]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:33.153000 audit[5058]: USER_END pid=5058 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:33.154000 audit[5058]: CRED_DISP pid=5058 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:33.162026 systemd[1]: sshd@19-10.0.0.35:22-10.0.0.1:47548.service: Deactivated successfully. Jul 2 06:54:33.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.35:22-10.0.0.1:47548 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:33.162579 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 06:54:33.163112 systemd-logind[1264]: Session 20 logged out. Waiting for processes to exit. 
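The PROCTITLE records scattered through these audit entries carry the audited process's command line as hex-encoded, NUL-separated argv. A small decoder sketch, applied to two payloads that appear above (the decoded strings in the comments are what those particular values expand to):

    def decode_proctitle(hexstr: str) -> str:
        # argv elements are joined with NUL bytes before being hex-encoded
        return " ".join(p.decode() for p in bytes.fromhex(hexstr).split(b"\x00"))

    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # sshd: core [priv]

    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700313030303030"
        "002D2D6E6F666C757368002D2D636F756E74657273"))
    # iptables-restore -w 5 -W 100000 --noflush --counters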
Jul 2 06:54:33.164416 systemd[1]: Started sshd@20-10.0.0.35:22-10.0.0.1:54910.service - OpenSSH per-connection server daemon (10.0.0.1:54910). Jul 2 06:54:33.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.35:22-10.0.0.1:54910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:33.165110 systemd-logind[1264]: Removed session 20. Jul 2 06:54:33.195000 audit[5080]: USER_ACCT pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:33.196730 sshd[5080]: Accepted publickey for core from 10.0.0.1 port 54910 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:33.196000 audit[5080]: CRED_ACQ pid=5080 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:33.196000 audit[5080]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc5f759990 a2=3 a3=7fdb29445480 items=0 ppid=1 pid=5080 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:33.196000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:33.197986 sshd[5080]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:33.201461 systemd-logind[1264]: New session 21 of user core. Jul 2 06:54:33.205915 systemd[1]: Started session-21.scope - Session 21 of User core. 
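The kubelet dns.go errors repeated through this log (at 06:54:27.219 and again at 06:54:32.219) mean the node's resolv.conf lists more nameservers than the limit of three that kubelet, like the glibc resolver, will actually use; the surplus entries are dropped and only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied. A quick host-side check, with the path and limit spelled out as assumptions:

    RESOLV_CONF = "/etc/resolv.conf"   # assumed location on the node
    LIMIT = 3                          # nameservers kubelet / the glibc resolver will apply

    with open(RESOLV_CONF) as f:
        servers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

    if len(servers) > LIMIT:
        print(f"{len(servers)} nameservers configured; only {servers[:LIMIT]} will be applied")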
Jul 2 06:54:33.209000 audit[5080]: USER_START pid=5080 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:33.211000 audit[5082]: CRED_ACQ pid=5082 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.511000 audit[5111]: NETFILTER_CFG table=filter:120 family=2 entries=8 op=nft_register_rule pid=5111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:34.511000 audit[5111]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcd43fd650 a2=0 a3=7ffcd43fd63c items=0 ppid=2535 pid=5111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:34.511000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:34.512000 audit[5111]: NETFILTER_CFG table=nat:121 family=2 entries=30 op=nft_register_rule pid=5111 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:34.512000 audit[5111]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffcd43fd650 a2=0 a3=7ffcd43fd63c items=0 ppid=2535 pid=5111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:34.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:34.895000 audit[5113]: NETFILTER_CFG table=filter:122 family=2 entries=20 op=nft_register_rule pid=5113 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:34.895000 audit[5113]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffce8a091d0 a2=0 a3=7ffce8a091bc items=0 ppid=2535 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:34.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:34.896000 audit[5113]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5113 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:34.896000 audit[5113]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffce8a091d0 a2=0 a3=0 items=0 ppid=2535 pid=5113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:34.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:34.903249 sshd[5080]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:34.903000 audit[5080]: USER_END pid=5080 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.903000 audit[5080]: CRED_DISP pid=5080 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.35:22-10.0.0.1:54910 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:34.910179 systemd[1]: sshd@20-10.0.0.35:22-10.0.0.1:54910.service: Deactivated successfully. Jul 2 06:54:34.910759 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 06:54:34.912403 systemd-logind[1264]: Session 21 logged out. Waiting for processes to exit. Jul 2 06:54:34.916133 systemd[1]: Started sshd@21-10.0.0.35:22-10.0.0.1:54914.service - OpenSSH per-connection server daemon (10.0.0.1:54914). Jul 2 06:54:34.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.35:22-10.0.0.1:54914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:34.917190 systemd-logind[1264]: Removed session 21. Jul 2 06:54:34.949000 audit[5116]: USER_ACCT pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.950442 sshd[5116]: Accepted publickey for core from 10.0.0.1 port 54914 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:34.950000 audit[5116]: CRED_ACQ pid=5116 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.950000 audit[5116]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd0f4d7030 a2=3 a3=7f5c27672480 items=0 ppid=1 pid=5116 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:34.950000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:34.951464 sshd[5116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:34.955408 systemd-logind[1264]: New session 22 of user core. Jul 2 06:54:34.960992 systemd[1]: Started session-22.scope - Session 22 of User core. 
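The NETFILTER_CFG records in this stretch are emitted each time an iptables-restore invocation (running as xtables-nft-multi) registers rules; every record names the table (filter or nat), a ruleset counter and the number of entries written. A throwaway tally over a saved journal, matching the fields exactly as they appear above (the file name is hypothetical):

    import re
    from collections import Counter

    rec = re.compile(r'NETFILTER_CFG table=(?P<table>\w+):\d+ family=\d+ entries=(?P<n>\d+)')

    totals = Counter()
    with open("journal.txt") as f:      # hypothetical dump of this log
        for line in f:
            for m in rec.finditer(line):
                totals[m["table"]] += int(m["n"])

    print(dict(totals))                 # e.g. {'filter': ..., 'nat': ...}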
Jul 2 06:54:34.965000 audit[5116]: USER_START pid=5116 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:34.966000 audit[5118]: CRED_ACQ pid=5118 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.104900 systemd[1]: run-containerd-runc-k8s.io-1312c2c46e61b81e135ed1cf9cf961f2ac8c84cb932cd8b612ee7cf24c8fb322-runc.P9bePD.mount: Deactivated successfully. Jul 2 06:54:35.189264 sshd[5116]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:35.190000 audit[5116]: USER_END pid=5116 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.190000 audit[5116]: CRED_DISP pid=5116 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.197511 systemd[1]: sshd@21-10.0.0.35:22-10.0.0.1:54914.service: Deactivated successfully. Jul 2 06:54:35.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.35:22-10.0.0.1:54914 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:35.198323 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 06:54:35.199164 systemd-logind[1264]: Session 22 logged out. Waiting for processes to exit. Jul 2 06:54:35.205201 systemd[1]: Started sshd@22-10.0.0.35:22-10.0.0.1:54930.service - OpenSSH per-connection server daemon (10.0.0.1:54930). Jul 2 06:54:35.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:54930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:35.206231 systemd-logind[1264]: Removed session 22. 
Jul 2 06:54:35.238000 audit[5149]: USER_ACCT pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.239227 sshd[5149]: Accepted publickey for core from 10.0.0.1 port 54930 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:35.239000 audit[5149]: CRED_ACQ pid=5149 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.239000 audit[5149]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0b6baf0 a2=3 a3=7f22e3ae1480 items=0 ppid=1 pid=5149 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:35.239000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:35.240646 sshd[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:35.243874 systemd-logind[1264]: New session 23 of user core. Jul 2 06:54:35.252925 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 2 06:54:35.256000 audit[5149]: USER_START pid=5149 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.258000 audit[5151]: CRED_ACQ pid=5151 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.370349 sshd[5149]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:35.370000 audit[5149]: USER_END pid=5149 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.370000 audit[5149]: CRED_DISP pid=5149 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:35.373234 systemd[1]: sshd@22-10.0.0.35:22-10.0.0.1:54930.service: Deactivated successfully. Jul 2 06:54:35.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.35:22-10.0.0.1:54930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:35.374068 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 06:54:35.374638 systemd-logind[1264]: Session 23 logged out. Waiting for processes to exit. Jul 2 06:54:35.375360 systemd-logind[1264]: Removed session 23. 
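Back at the top of this excerpt, the AVC records show kube-controller-manager being denied the watch permission on /etc/kubernetes/pki/ca.crt. The paired SYSCALL record carries syscall=254, which on x86_64 (arch=c000003e) is inotify_add_watch, and exit=-13, i.e. EACCES, consistent with permissive=0. A tiny lookup sketch for translating those two numbers (only the one syscall seen in this log is hard-coded):

    import errno
    import os

    syscall_name = {254: "inotify_add_watch"}   # x86_64 numbering, just the case above
    nr, exit_code = 254, -13

    print(f"syscall={nr} -> {syscall_name[nr]}")
    print(f"exit={exit_code} -> {errno.errorcode[-exit_code]} ({os.strerror(-exit_code)})")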
Jul 2 06:54:35.908000 audit[5162]: NETFILTER_CFG table=filter:124 family=2 entries=32 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:35.908000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7ffd4dc2bc00 a2=0 a3=7ffd4dc2bbec items=0 ppid=2535 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:35.908000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:35.909000 audit[5162]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5162 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:35.909000 audit[5162]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffd4dc2bc00 a2=0 a3=0 items=0 ppid=2535 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:35.909000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:36.921000 audit[5164]: NETFILTER_CFG table=filter:126 family=2 entries=32 op=nft_register_rule pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:36.923405 kernel: kauditd_printk_skb: 75 callbacks suppressed Jul 2 06:54:36.923476 kernel: audit: type=1325 audit(1719903276.921:783): table=filter:126 family=2 entries=32 op=nft_register_rule pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:36.921000 audit[5164]: SYSCALL arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff7c3012b0 a2=0 a3=7fff7c30129c items=0 ppid=2535 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:36.929226 kernel: audit: type=1300 audit(1719903276.921:783): arch=c000003e syscall=46 success=yes exit=11860 a0=3 a1=7fff7c3012b0 a2=0 a3=7fff7c30129c items=0 ppid=2535 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:36.929292 kernel: audit: type=1327 audit(1719903276.921:783): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:36.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:36.922000 audit[5164]: NETFILTER_CFG table=nat:127 family=2 entries=34 op=nft_register_chain pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:36.922000 audit[5164]: SYSCALL arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff7c3012b0 a2=0 a3=7fff7c30129c items=0 ppid=2535 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:36.938384 kernel: audit: type=1325 audit(1719903276.922:784): table=nat:127 family=2 entries=34 
op=nft_register_chain pid=5164 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:36.938450 kernel: audit: type=1300 audit(1719903276.922:784): arch=c000003e syscall=46 success=yes exit=11236 a0=3 a1=7fff7c3012b0 a2=0 a3=7fff7c30129c items=0 ppid=2535 pid=5164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:36.938483 kernel: audit: type=1327 audit(1719903276.922:784): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:36.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:39.522000 audit[5168]: NETFILTER_CFG table=filter:128 family=2 entries=20 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:39.522000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcfb961ac0 a2=0 a3=7ffcfb961aac items=0 ppid=2535 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.528693 kernel: audit: type=1325 audit(1719903279.522:785): table=filter:128 family=2 entries=20 op=nft_register_rule pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:39.528749 kernel: audit: type=1300 audit(1719903279.522:785): arch=c000003e syscall=46 success=yes exit=2932 a0=3 a1=7ffcfb961ac0 a2=0 a3=7ffcfb961aac items=0 ppid=2535 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.528777 kernel: audit: type=1327 audit(1719903279.522:785): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:39.522000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:39.523000 audit[5168]: NETFILTER_CFG table=nat:129 family=2 entries=106 op=nft_register_chain pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:39.523000 audit[5168]: SYSCALL arch=c000003e syscall=46 success=yes exit=49452 a0=3 a1=7ffcfb961ac0 a2=0 a3=7ffcfb961aac items=0 ppid=2535 pid=5168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:39.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 2 06:54:39.535832 kernel: audit: type=1325 audit(1719903279.523:786): table=nat:129 family=2 entries=106 op=nft_register_chain pid=5168 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 2 06:54:40.380956 systemd[1]: Started sshd@23-10.0.0.35:22-10.0.0.1:54940.service - OpenSSH per-connection server daemon (10.0.0.1:54940). Jul 2 06:54:40.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.35:22-10.0.0.1:54940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jul 2 06:54:40.411000 audit[5171]: USER_ACCT pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.412938 sshd[5171]: Accepted publickey for core from 10.0.0.1 port 54940 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:40.413000 audit[5171]: CRED_ACQ pid=5171 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.413000 audit[5171]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffddcc11620 a2=3 a3=7f67fedf7480 items=0 ppid=1 pid=5171 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:40.413000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:40.414334 sshd[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:40.419856 systemd-logind[1264]: New session 24 of user core. Jul 2 06:54:40.423971 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 2 06:54:40.428000 audit[5171]: USER_START pid=5171 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.430000 audit[5173]: CRED_ACQ pid=5173 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.529751 sshd[5171]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:40.530000 audit[5171]: USER_END pid=5171 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.530000 audit[5171]: CRED_DISP pid=5171 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:40.532568 systemd[1]: sshd@23-10.0.0.35:22-10.0.0.1:54940.service: Deactivated successfully. Jul 2 06:54:40.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.35:22-10.0.0.1:54940 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:40.533341 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 06:54:40.533909 systemd-logind[1264]: Session 24 logged out. Waiting for processes to exit. Jul 2 06:54:40.534569 systemd-logind[1264]: Removed session 24. Jul 2 06:54:45.541938 systemd[1]: Started sshd@24-10.0.0.35:22-10.0.0.1:49492.service - OpenSSH per-connection server daemon (10.0.0.1:49492). 
Jul 2 06:54:45.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.35:22-10.0.0.1:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:45.543303 kernel: kauditd_printk_skb: 13 callbacks suppressed Jul 2 06:54:45.543533 kernel: audit: type=1130 audit(1719903285.541:796): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.35:22-10.0.0.1:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:45.575000 audit[5195]: USER_ACCT pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.576733 sshd[5195]: Accepted publickey for core from 10.0.0.1 port 49492 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:45.578095 sshd[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:45.576000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.582609 kernel: audit: type=1101 audit(1719903285.575:797): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.582700 kernel: audit: type=1103 audit(1719903285.576:798): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.582731 kernel: audit: type=1006 audit(1719903285.576:799): pid=5195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 2 06:54:45.582200 systemd-logind[1264]: New session 25 of user core. Jul 2 06:54:45.576000 audit[5195]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdba0cbfa0 a2=3 a3=7f57aed4b480 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:45.587576 kernel: audit: type=1300 audit(1719903285.576:799): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdba0cbfa0 a2=3 a3=7f57aed4b480 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:45.587658 kernel: audit: type=1327 audit(1719903285.576:799): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:45.576000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:45.594137 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 2 06:54:45.598000 audit[5195]: USER_START pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.599000 audit[5197]: CRED_ACQ pid=5197 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.605120 kernel: audit: type=1105 audit(1719903285.598:800): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.605159 kernel: audit: type=1103 audit(1719903285.599:801): pid=5197 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.702911 sshd[5195]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:45.702000 audit[5195]: USER_END pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.705380 systemd[1]: sshd@24-10.0.0.35:22-10.0.0.1:49492.service: Deactivated successfully. Jul 2 06:54:45.706182 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 06:54:45.706711 systemd-logind[1264]: Session 25 logged out. Waiting for processes to exit. Jul 2 06:54:45.707468 systemd-logind[1264]: Removed session 25. Jul 2 06:54:45.703000 audit[5195]: CRED_DISP pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.710244 kernel: audit: type=1106 audit(1719903285.702:802): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.710286 kernel: audit: type=1104 audit(1719903285.703:803): pid=5195 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:45.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.0.0.35:22-10.0.0.1:49492 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:49.218805 kubelet[2344]: E0702 06:54:49.218743 2344 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 2 06:54:50.712918 systemd[1]: Started sshd@25-10.0.0.35:22-10.0.0.1:49494.service - OpenSSH per-connection server daemon (10.0.0.1:49494). 
Jul 2 06:54:50.714181 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:54:50.714253 kernel: audit: type=1130 audit(1719903290.712:805): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:49494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:50.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:49494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:50.743000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.745000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.758322 sshd[5229]: Accepted publickey for core from 10.0.0.1 port 49494 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:50.773053 kernel: audit: type=1101 audit(1719903290.743:806): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.773079 kernel: audit: type=1103 audit(1719903290.745:807): pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.773100 kernel: audit: type=1006 audit(1719903290.745:808): pid=5229 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 2 06:54:50.773117 kernel: audit: type=1300 audit(1719903290.745:808): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdaa65c00 a2=3 a3=7f240f80b480 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:50.773133 kernel: audit: type=1327 audit(1719903290.745:808): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:50.745000 audit[5229]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcdaa65c00 a2=3 a3=7f240f80b480 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:50.745000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:50.746328 sshd[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:50.750120 systemd-logind[1264]: New session 26 of user core. Jul 2 06:54:50.773023 systemd[1]: Started session-26.scope - Session 26 of User core. 
Jul 2 06:54:50.776000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.776000 audit[5231]: CRED_ACQ pid=5231 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.784623 kernel: audit: type=1105 audit(1719903290.776:809): pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.784669 kernel: audit: type=1103 audit(1719903290.776:810): pid=5231 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.876234 sshd[5229]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:50.876000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.878528 systemd[1]: sshd@25-10.0.0.35:22-10.0.0.1:49494.service: Deactivated successfully. Jul 2 06:54:50.879417 systemd[1]: session-26.scope: Deactivated successfully. Jul 2 06:54:50.879941 systemd-logind[1264]: Session 26 logged out. Waiting for processes to exit. Jul 2 06:54:50.876000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.880733 systemd-logind[1264]: Removed session 26. Jul 2 06:54:50.894151 kernel: audit: type=1106 audit(1719903290.876:811): pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.894209 kernel: audit: type=1104 audit(1719903290.876:812): pid=5229 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:50.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-10.0.0.35:22-10.0.0.1:49494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:55.891765 systemd[1]: Started sshd@26-10.0.0.35:22-10.0.0.1:39668.service - OpenSSH per-connection server daemon (10.0.0.1:39668). Jul 2 06:54:55.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:39668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 06:54:55.895261 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 2 06:54:55.895328 kernel: audit: type=1130 audit(1719903295.891:814): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:39668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 06:54:55.923000 audit[5247]: USER_ACCT pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.924483 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 39668 ssh2: RSA SHA256:9n9RhOPT7vIcCRgLwf+QUbEnLol33spbTO+31IUDq6w Jul 2 06:54:55.927835 kernel: audit: type=1101 audit(1719903295.923:815): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.928034 kernel: audit: type=1103 audit(1719903295.927:816): pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.927000 audit[5247]: CRED_ACQ pid=5247 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.928364 sshd[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 06:54:55.932500 systemd-logind[1264]: New session 27 of user core. Jul 2 06:54:55.933172 kernel: audit: type=1006 audit(1719903295.927:817): pid=5247 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jul 2 06:54:55.933217 kernel: audit: type=1300 audit(1719903295.927:817): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff73a417c0 a2=3 a3=7f73104e0480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:55.927000 audit[5247]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff73a417c0 a2=3 a3=7f73104e0480 items=0 ppid=1 pid=5247 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 06:54:55.927000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:55.937614 kernel: audit: type=1327 audit(1719903295.927:817): proctitle=737368643A20636F7265205B707269765D Jul 2 06:54:55.946971 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jul 2 06:54:55.951000 audit[5247]: USER_START pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.952000 audit[5249]: CRED_ACQ pid=5249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.958351 kernel: audit: type=1105 audit(1719903295.951:818): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:55.958407 kernel: audit: type=1103 audit(1719903295.952:819): pid=5249 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:56.052119 sshd[5247]: pam_unix(sshd:session): session closed for user core Jul 2 06:54:56.052000 audit[5247]: USER_END pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:56.055116 systemd[1]: sshd@26-10.0.0.35:22-10.0.0.1:39668.service: Deactivated successfully. Jul 2 06:54:56.055972 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 06:54:56.056506 systemd-logind[1264]: Session 27 logged out. Waiting for processes to exit. Jul 2 06:54:56.052000 audit[5247]: CRED_DISP pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:56.057470 systemd-logind[1264]: Removed session 27. Jul 2 06:54:56.059460 kernel: audit: type=1106 audit(1719903296.052:820): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:56.059533 kernel: audit: type=1104 audit(1719903296.052:821): pid=5247 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Jul 2 06:54:56.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-10.0.0.35:22-10.0.0.1:39668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'